00:00:00.001 Started by upstream project "autotest-nightly" build number 4369
00:00:00.001 originally caused by:
00:00:00.001 Started by upstream project "nightly-trigger" build number 3732
00:00:00.001 originally caused by:
00:00:00.001 Started by timer
00:00:00.001 Started by timer
00:00:00.155 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.156 The recommended git tool is: git
00:00:00.156 using credential 00000000-0000-0000-0000-000000000002
00:00:00.158 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.214 Fetching changes from the remote Git repository
00:00:00.216 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.268 Using shallow fetch with depth 1
00:00:00.268 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.268 > git --version # timeout=10
00:00:00.300 > git --version # 'git version 2.39.2'
00:00:00.300 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.321 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.321 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:07.202 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:07.215 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:07.227 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:07.227 > git config core.sparsecheckout # timeout=10
00:00:07.238 > git read-tree -mu HEAD # timeout=10
00:00:07.253 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:07.270 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:07.270 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
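The sequence above is Jenkins' standard lightweight checkout: configure the remote, shallow-fetch a single ref with --depth=1, then force-checkout the fetched commit on a detached HEAD. A minimal sketch of the same pattern, reusing the URL and ref from the log (the local directory name is an illustrative assumption):

    # Shallow-fetch one branch and check out the fetched commit (sketch).
    git init jbp && cd jbp
    git fetch --tags --force --progress --depth=1 -- \
        https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master
    git checkout -f FETCH_HEAD   # detached HEAD, as in the log above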
00:00:07.355 [Pipeline] Start of Pipeline
00:00:07.366 [Pipeline] library
00:00:07.367 Loading library shm_lib@master
00:00:07.367 Library shm_lib@master is cached. Copying from home.
00:00:07.381 [Pipeline] node
00:00:07.396 Running on VM-host-SM38 in /var/jenkins/workspace/nvme-vg-autotest
00:00:07.397 [Pipeline] {
00:00:07.405 [Pipeline] catchError
00:00:07.406 [Pipeline] {
00:00:07.416 [Pipeline] wrap
00:00:07.425 [Pipeline] {
00:00:07.430 [Pipeline] stage
00:00:07.431 [Pipeline] { (Prologue)
00:00:07.447 [Pipeline] echo
00:00:07.449 Node: VM-host-SM38
00:00:07.455 [Pipeline] cleanWs
00:00:07.465 [WS-CLEANUP] Deleting project workspace...
00:00:07.465 [WS-CLEANUP] Deferred wipeout is used...
00:00:07.472 [WS-CLEANUP] done
00:00:07.666 [Pipeline] setCustomBuildProperty
00:00:07.734 [Pipeline] httpRequest
00:00:08.035 [Pipeline] echo
00:00:08.037 Sorcerer 10.211.164.20 is alive
00:00:08.046 [Pipeline] retry
00:00:08.048 [Pipeline] {
00:00:08.059 [Pipeline] httpRequest
00:00:08.063 HttpMethod: GET
00:00:08.064 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:08.065 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:08.083 Response Code: HTTP/1.1 200 OK
00:00:08.084 Success: Status code 200 is in the accepted range: 200,404
00:00:08.084 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:34.989 [Pipeline] }
00:00:35.006 [Pipeline] // retry
00:00:35.014 [Pipeline] sh
00:00:35.301 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:35.319 [Pipeline] httpRequest
00:00:35.711 [Pipeline] echo
00:00:35.713 Sorcerer 10.211.164.20 is alive
00:00:35.723 [Pipeline] retry
00:00:35.725 [Pipeline] {
00:00:35.740 [Pipeline] httpRequest
00:00:35.745 HttpMethod: GET
00:00:35.746 URL: http://10.211.164.20/packages/spdk_e01cb43b8578f9155d07a9bc6eee4e70a3af96b0.tar.gz
00:00:35.746 Sending request to url: http://10.211.164.20/packages/spdk_e01cb43b8578f9155d07a9bc6eee4e70a3af96b0.tar.gz
00:00:35.762 Response Code: HTTP/1.1 200 OK
00:00:35.762 Success: Status code 200 is in the accepted range: 200,404
00:00:35.763 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_e01cb43b8578f9155d07a9bc6eee4e70a3af96b0.tar.gz
00:01:23.822 [Pipeline] }
00:01:23.840 [Pipeline] // retry
00:01:23.849 [Pipeline] sh
00:01:24.128 + tar --no-same-owner -xf spdk_e01cb43b8578f9155d07a9bc6eee4e70a3af96b0.tar.gz
00:01:27.483 [Pipeline] sh
00:01:27.767 + git -C spdk log --oneline -n5
00:01:27.767 e01cb43b8 mk/spdk.common.mk sed the minor version
00:01:27.767 d58eef2a2 nvme/rdma: Fix reinserting qpair in connecting list after stale state
00:01:27.767 2104eacf0 test/check_so_deps: use VERSION to look for prior tags
00:01:27.767 66289a6db build: use VERSION file for storing version
00:01:27.767 626389917 nvme/rdma: Don't limit max_sge if UMR is used
00:01:27.786 [Pipeline] writeFile
00:01:27.800 [Pipeline] sh
00:01:28.088 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:01:28.102 [Pipeline] sh
00:01:28.387 + cat autorun-spdk.conf
00:01:28.388 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:28.388 SPDK_TEST_NVME=1
00:01:28.388 SPDK_TEST_FTL=1
00:01:28.388 SPDK_TEST_ISAL=1
00:01:28.388 SPDK_RUN_ASAN=1
00:01:28.388 SPDK_RUN_UBSAN=1
00:01:28.388 SPDK_TEST_XNVME=1
00:01:28.388 SPDK_TEST_NVME_FDP=1
00:01:28.388 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:28.397 RUN_NIGHTLY=1
00:01:28.399 [Pipeline] }
00:01:28.412 [Pipeline] // stage
00:01:28.426 [Pipeline] stage
00:01:28.428 [Pipeline] { (Run VM)
00:01:28.440 [Pipeline] sh
00:01:28.725 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:01:28.725 + echo 'Start stage prepare_nvme.sh'
00:01:28.725 Start stage prepare_nvme.sh
00:01:28.725 + [[ -n 10 ]]
00:01:28.725 + disk_prefix=ex10
00:01:28.725 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]]
00:01:28.725 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]]
00:01:28.725 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf
00:01:28.725 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:28.725 ++ SPDK_TEST_NVME=1
00:01:28.725 ++ SPDK_TEST_FTL=1
00:01:28.725 ++ SPDK_TEST_ISAL=1
00:01:28.725 ++ SPDK_RUN_ASAN=1
00:01:28.725 ++ SPDK_RUN_UBSAN=1
00:01:28.725 ++ SPDK_TEST_XNVME=1
00:01:28.725 ++ SPDK_TEST_NVME_FDP=1
00:01:28.725 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:28.725 ++ RUN_NIGHTLY=1
00:01:28.725 + cd /var/jenkins/workspace/nvme-vg-autotest
00:01:28.725 + nvme_files=()
00:01:28.725 + declare -A nvme_files
00:01:28.725 + backend_dir=/var/lib/libvirt/images/backends
00:01:28.725 + nvme_files['nvme.img']=5G
00:01:28.725 + nvme_files['nvme-cmb.img']=5G
00:01:28.725 + nvme_files['nvme-multi0.img']=4G
00:01:28.725 + nvme_files['nvme-multi1.img']=4G
00:01:28.725 + nvme_files['nvme-multi2.img']=4G
00:01:28.725 + nvme_files['nvme-openstack.img']=8G
00:01:28.725 + nvme_files['nvme-zns.img']=5G
00:01:28.725 + (( SPDK_TEST_NVME_PMR == 1 ))
00:01:28.725 + (( SPDK_TEST_FTL == 1 ))
00:01:28.725 + nvme_files["nvme-ftl.img"]=6G
00:01:28.725 + (( SPDK_TEST_NVME_FDP == 1 ))
00:01:28.725 + nvme_files["nvme-fdp.img"]=1G
00:01:28.725 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:01:28.725 + for nvme in "${!nvme_files[@]}"
00:01:28.725 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-multi2.img -s 4G
00:01:28.725 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:01:28.725 + for nvme in "${!nvme_files[@]}"
00:01:28.725 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-ftl.img -s 6G
00:01:28.725 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc
00:01:28.725 + for nvme in "${!nvme_files[@]}"
00:01:28.725 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-cmb.img -s 5G
00:01:28.726 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:01:28.726 + for nvme in "${!nvme_files[@]}"
00:01:28.726 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-openstack.img -s 8G
00:01:28.726 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:01:28.726 + for nvme in "${!nvme_files[@]}"
00:01:28.726 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-zns.img -s 5G
00:01:29.298 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:01:29.298 + for nvme in "${!nvme_files[@]}"
00:01:29.298 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-multi1.img -s 4G
00:01:29.298 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:01:29.298 + for nvme in "${!nvme_files[@]}"
00:01:29.298 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-multi0.img -s 4G
00:01:29.298 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:01:29.298 + for nvme in "${!nvme_files[@]}"
00:01:29.298 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-fdp.img -s 1G
00:01:29.298 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc
00:01:29.298 + for nvme in "${!nvme_files[@]}"
00:01:29.298 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme.img -s 5G
00:01:30.243 Formatting '/var/lib/libvirt/images/backends/ex10-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:01:30.243 ++ sudo grep -rl ex10-nvme.img /etc/libvirt/qemu
00:01:30.243 + echo 'End stage prepare_nvme.sh'
00:01:30.243 End stage prepare_nvme.sh
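The prepare stage traced above is a plain bash pattern: an associative array maps backing-file names to sizes, test flags append optional entries (FTL, FDP), and a loop hands each entry to spdk/scripts/vagrant/create_nvme_img.sh. A minimal sketch of that pattern; the qemu-img call is an assumption standing in for create_nvme_img.sh, inferred from the 'Formatting ... fmt=raw ... preallocation=falloc' output it produces:

    #!/usr/bin/env bash
    # SPDK_TEST_* flags come from autorun-spdk.conf (sourced by the real script).
    declare -A nvme_files                      # image name -> size
    backend_dir=/var/lib/libvirt/images/backends
    nvme_files['nvme.img']=5G
    nvme_files['nvme-multi0.img']=4G
    (( SPDK_TEST_FTL == 1 ))      && nvme_files['nvme-ftl.img']=6G
    (( SPDK_TEST_NVME_FDP == 1 )) && nvme_files['nvme-fdp.img']=1G
    for nvme in "${!nvme_files[@]}"; do
        # Real CI: sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n <path> -s <size>
        qemu-img create -f raw -o preallocation=falloc \
            "$backend_dir/ex10-$nvme" "${nvme_files[$nvme]}"
    done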
00:01:30.256 [Pipeline] sh
00:01:30.539 + DISTRO=fedora39
00:01:30.539 + CPUS=10
00:01:30.539 + RAM=12288
00:01:30.539 + jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:01:30.539 Setup: -n 10 -s 12288 -x -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex10-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex10-nvme.img -b /var/lib/libvirt/images/backends/ex10-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex10-nvme-multi1.img:/var/lib/libvirt/images/backends/ex10-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex10-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39
00:01:30.539
00:01:30.539 DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant
00:01:30.539 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk
00:01:30.539 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest
00:01:30.539 HELP=0
00:01:30.539 DRY_RUN=0
00:01:30.539 NVME_FILE=/var/lib/libvirt/images/backends/ex10-nvme-ftl.img,/var/lib/libvirt/images/backends/ex10-nvme.img,/var/lib/libvirt/images/backends/ex10-nvme-multi0.img,/var/lib/libvirt/images/backends/ex10-nvme-fdp.img,
00:01:30.539 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme,
00:01:30.539 NVME_AUTO_CREATE=0
00:01:30.539 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex10-nvme-multi1.img:/var/lib/libvirt/images/backends/ex10-nvme-multi2.img,,
00:01:30.539 NVME_CMB=,,,,
00:01:30.539 NVME_PMR=,,,,
00:01:30.539 NVME_ZNS=,,,,
00:01:30.539 NVME_MS=true,,,,
00:01:30.539 NVME_FDP=,,,on,
00:01:30.539 SPDK_VAGRANT_DISTRO=fedora39
00:01:30.539 SPDK_VAGRANT_VMCPU=10
00:01:30.539 SPDK_VAGRANT_VMRAM=12288
00:01:30.539 SPDK_VAGRANT_PROVIDER=libvirt
00:01:30.539 SPDK_VAGRANT_HTTP_PROXY=
00:01:30.539 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:01:30.539 SPDK_OPENSTACK_NETWORK=0
00:01:30.539 VAGRANT_PACKAGE_BOX=0
00:01:30.539 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:01:30.539 FORCE_DISTRO=true
00:01:30.539 VAGRANT_BOX_VERSION=
00:01:30.539 EXTRA_VAGRANTFILES=
00:01:30.539 NIC_MODEL=e1000
00:01:30.539
00:01:30.539 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt'
00:01:30.539 /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest
00:01:33.082 Bringing machine 'default' up with 'libvirt' provider...
00:01:33.344 ==> default: Creating image (snapshot of base box volume).
00:01:33.344 ==> default: Creating domain with the following settings...
00:01:33.344 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1734375797_1e1b3d467a22d3aa1c20
00:01:33.344 ==> default: -- Domain type: kvm
00:01:33.344 ==> default: -- Cpus: 10
00:01:33.344 ==> default: -- Feature: acpi
00:01:33.344 ==> default: -- Feature: apic
00:01:33.344 ==> default: -- Feature: pae
00:01:33.344 ==> default: -- Memory: 12288M
00:01:33.344 ==> default: -- Memory Backing: hugepages:
00:01:33.344 ==> default: -- Management MAC:
00:01:33.344 ==> default: -- Loader:
00:01:33.344 ==> default: -- Nvram:
00:01:33.344 ==> default: -- Base box: spdk/fedora39
00:01:33.344 ==> default: -- Storage pool: default
00:01:33.344 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1734375797_1e1b3d467a22d3aa1c20.img (20G)
00:01:33.344 ==> default: -- Volume Cache: default
00:01:33.344 ==> default: -- Kernel:
00:01:33.344 ==> default: -- Initrd:
00:01:33.344 ==> default: -- Graphics Type: vnc
00:01:33.344 ==> default: -- Graphics Port: -1
00:01:33.344 ==> default: -- Graphics IP: 127.0.0.1
00:01:33.344 ==> default: -- Graphics Password: Not defined
00:01:33.344 ==> default: -- Video Type: cirrus
00:01:33.344 ==> default: -- Video VRAM: 9216
00:01:33.344 ==> default: -- Sound Type:
00:01:33.344 ==> default: -- Keymap: en-us
00:01:33.344 ==> default: -- TPM Path:
00:01:33.344 ==> default: -- INPUT: type=mouse, bus=ps2
00:01:33.344 ==> default: -- Command line args:
00:01:33.344 ==> default: -> value=-device,
00:01:33.344 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:01:33.344 ==> default: -> value=-drive,
00:01:33.344 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex10-nvme-ftl.img,if=none,id=nvme-0-drive0,
00:01:33.344 ==> default: -> value=-device,
00:01:33.344 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64,
00:01:33.344 ==> default: -> value=-device,
00:01:33.344 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:01:33.344 ==> default: -> value=-drive,
00:01:33.344 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex10-nvme.img,if=none,id=nvme-1-drive0,
00:01:33.344 ==> default: -> value=-device,
00:01:33.344 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:33.344 ==> default: -> value=-device,
00:01:33.344 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12,
00:01:33.344 ==> default: -> value=-drive,
00:01:33.344 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex10-nvme-multi0.img,if=none,id=nvme-2-drive0,
00:01:33.344 ==> default: -> value=-device,
00:01:33.344 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:33.344 ==> default: -> value=-drive,
00:01:33.344 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex10-nvme-multi1.img,if=none,id=nvme-2-drive1,
00:01:33.344 ==> default: -> value=-device,
00:01:33.344 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:33.344 ==> default: -> value=-drive,
00:01:33.344 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex10-nvme-multi2.img,if=none,id=nvme-2-drive2,
00:01:33.344 ==> default: -> value=-device,
00:01:33.344 ==> default: -> value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:33.344 ==> default: -> value=-device,
00:01:33.344 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8,
00:01:33.344 ==> default: -> value=-device,
00:01:33.344 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3,
00:01:33.344 ==> default: -> value=-drive,
00:01:33.344 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex10-nvme-fdp.img,if=none,id=nvme-3-drive0,
00:01:33.344 ==> default: -> value=-device,
00:01:33.344 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
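The "Command line args" block above follows the standard QEMU recipe for emulated NVMe: -device nvme defines a controller (id, serial, PCI addr), -drive with if=none registers a backing file without attaching it, and -device nvme-ns binds that drive to a controller as a namespace (nsid, block sizes, optional metadata via ms=); -device nvme-subsys groups controllers into one subsystem and carries the FDP parameters (fdp.runs, fdp.nrg, fdp.nruh). A minimal single-controller, single-namespace invocation built from the same flags (machine type and memory size are illustrative; the disk path is reused from the log):

    qemu-system-x86_64 -machine q35 -m 1024 \
        -device nvme,id=nvme-0,serial=12340 \
        -drive format=raw,file=/var/lib/libvirt/images/backends/ex10-nvme.img,if=none,id=nvme-0-drive0 \
        -device nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,logical_block_size=4096,physical_block_size=4096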
00:01:33.605 ==> default: Creating shared folders metadata...
00:01:33.605 ==> default: Starting domain.
00:01:35.513 ==> default: Waiting for domain to get an IP address...
00:01:53.631 ==> default: Waiting for SSH to become available...
00:01:53.631 ==> default: Configuring and enabling network interfaces...
00:01:56.932 default: SSH address: 192.168.121.61:22
00:01:56.932 default: SSH username: vagrant
00:01:56.932 default: SSH auth method: private key
00:01:58.881 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:02:07.022 ==> default: Mounting SSHFS shared folder...
00:02:08.408 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:02:08.408 ==> default: Checking Mount..
00:02:09.793 ==> default: Folder Successfully Mounted!
00:02:09.793
00:02:09.793 SUCCESS!
00:02:09.793
00:02:09.793 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use.
00:02:09.793 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:02:09.793 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt" to destroy all trace of vm.
00:02:09.793
00:02:09.803 [Pipeline] }
00:02:09.817 [Pipeline] // stage
00:02:09.826 [Pipeline] dir
00:02:09.826 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt
00:02:09.828 [Pipeline] {
00:02:09.840 [Pipeline] catchError
00:02:09.842 [Pipeline] {
00:02:09.854 [Pipeline] sh
00:02:10.138 + vagrant ssh-config --host vagrant
00:02:10.138 + sed -ne '/^Host/,$p'
00:02:10.138 + tee ssh_conf
00:02:12.709 Host vagrant
00:02:12.709 HostName 192.168.121.61
00:02:12.709 User vagrant
00:02:12.709 Port 22
00:02:12.709 UserKnownHostsFile /dev/null
00:02:12.709 StrictHostKeyChecking no
00:02:12.709 PasswordAuthentication no
00:02:12.709 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:02:12.709 IdentitiesOnly yes
00:02:12.709 LogLevel FATAL
00:02:12.709 ForwardAgent yes
00:02:12.709 ForwardX11 yes
00:02:12.709
00:02:12.724 [Pipeline] withEnv
00:02:12.726 [Pipeline] {
00:02:12.739 [Pipeline] sh
00:02:13.024 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant '#!/bin/bash
00:02:13.024 source /etc/os-release
00:02:13.024 [[ -e /image.version ]] && img=$(< /image.version)
00:02:13.024 # Minimal, systemd-like check.
00:02:13.024 if [[ -e /.dockerenv ]]; then
00:02:13.024 # Clear garbage from the node'\''s name:
00:02:13.024 # agt-er_autotest_547-896 -> autotest_547-896
00:02:13.024 # $HOSTNAME is the actual container id
00:02:13.024 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:02:13.024 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:02:13.024 # We can assume this is a mount from a host where container is running,
00:02:13.024 # so fetch its hostname to easily identify the target swarm worker.
00:02:13.024 container="$(< /etc/hostname) ($agent)"
00:02:13.024 else
00:02:13.024 # Fallback
00:02:13.024 container=$agent
00:02:13.024 fi
00:02:13.024 fi
00:02:13.024 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:02:13.024 '
00:02:13.299 [Pipeline] }
00:02:13.315 [Pipeline] // withEnv
00:02:13.323 [Pipeline] setCustomBuildProperty
00:02:13.337 [Pipeline] stage
00:02:13.339 [Pipeline] { (Tests)
00:02:13.355 [Pipeline] sh
00:02:13.639 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:02:13.913 [Pipeline] sh
00:02:14.195 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:02:14.472 [Pipeline] timeout
00:02:14.472 Timeout set to expire in 50 min
00:02:14.474 [Pipeline] {
00:02:14.487 [Pipeline] sh
00:02:14.769 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'git -C spdk_repo/spdk reset --hard'
00:02:15.340 HEAD is now at e01cb43b8 mk/spdk.common.mk sed the minor version
00:02:15.353 [Pipeline] sh
00:02:15.637 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'sudo chown vagrant:vagrant spdk_repo'
00:02:15.912 [Pipeline] sh
00:02:16.196 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:02:16.474 [Pipeline] sh
00:02:16.776 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo'
00:02:17.038 ++ readlink -f spdk_repo
00:02:17.038 + DIR_ROOT=/home/vagrant/spdk_repo
00:02:17.038 + [[ -n /home/vagrant/spdk_repo ]]
00:02:17.038 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:02:17.038 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:02:17.038 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:02:17.038 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:02:17.038 + [[ -d /home/vagrant/spdk_repo/output ]]
00:02:17.038 + [[ nvme-vg-autotest == pkgdep-* ]]
00:02:17.038 + cd /home/vagrant/spdk_repo
00:02:17.038 + source /etc/os-release
00:02:17.038 ++ NAME='Fedora Linux'
00:02:17.038 ++ VERSION='39 (Cloud Edition)'
00:02:17.038 ++ ID=fedora
00:02:17.038 ++ VERSION_ID=39
00:02:17.038 ++ VERSION_CODENAME=
00:02:17.038 ++ PLATFORM_ID=platform:f39
00:02:17.038 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:02:17.038 ++ ANSI_COLOR='0;38;2;60;110;180'
00:02:17.038 ++ LOGO=fedora-logo-icon
00:02:17.038 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:02:17.038 ++ HOME_URL=https://fedoraproject.org/
00:02:17.038 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:02:17.038 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:02:17.038 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:02:17.038 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:02:17.038 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:02:17.038 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:02:17.038 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:02:17.038 ++ SUPPORT_END=2024-11-12
00:02:17.038 ++ VARIANT='Cloud Edition'
00:02:17.038 ++ VARIANT_ID=cloud
00:02:17.038 + uname -a
00:02:17.038 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:02:17.038 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:02:17.300 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:02:17.562 Hugepages
00:02:17.562 node hugesize free / total
00:02:17.562 node0 1048576kB 0 / 0
00:02:17.562 node0 2048kB 0 / 0
00:02:17.562
00:02:17.562 Type BDF Vendor Device NUMA Driver Device Block devices
00:02:17.562 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:02:17.562 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:02:17.562 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme3 nvme3n1
00:02:17.562 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3
00:02:17.822 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme2 nvme2n1
00:02:17.822 + rm -f /tmp/spdk-ld-path
00:02:17.822 + source autorun-spdk.conf
00:02:17.822 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:17.822 ++ SPDK_TEST_NVME=1
00:02:17.822 ++ SPDK_TEST_FTL=1
00:02:17.822 ++ SPDK_TEST_ISAL=1
00:02:17.822 ++ SPDK_RUN_ASAN=1
00:02:17.822 ++ SPDK_RUN_UBSAN=1
00:02:17.822 ++ SPDK_TEST_XNVME=1
00:02:17.822 ++ SPDK_TEST_NVME_FDP=1
00:02:17.822 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:17.822 ++ RUN_NIGHTLY=1
00:02:17.822 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:02:17.822 + [[ -n '' ]]
00:02:17.822 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:02:17.822 + for M in /var/spdk/build-*-manifest.txt
00:02:17.822 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:02:17.822 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:02:17.822 + for M in /var/spdk/build-*-manifest.txt
00:02:17.822 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:02:17.822 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:02:17.822 + for M in /var/spdk/build-*-manifest.txt
00:02:17.822 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:02:17.822 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:02:17.822 ++ uname
00:02:17.822 + [[ Linux == \L\i\n\u\x ]]
00:02:17.822 + sudo dmesg -T
00:02:17.822 + sudo dmesg --clear
00:02:17.822 + dmesg_pid=5029
00:02:17.822 + [[ Fedora Linux == FreeBSD ]]
00:02:17.822 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:17.822 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:17.822 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:02:17.822 + [[ -x /usr/src/fio-static/fio ]]
00:02:17.822 + sudo dmesg -Tw
00:02:17.822 + export FIO_BIN=/usr/src/fio-static/fio
00:02:17.822 + FIO_BIN=/usr/src/fio-static/fio
00:02:17.822 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:02:17.822 + [[ ! -v VFIO_QEMU_BIN ]]
00:02:17.822 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:02:17.822 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:17.822 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:17.822 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:02:17.822 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:17.822 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:17.822 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:02:17.822 19:04:02 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:02:17.822 19:04:02 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf
00:02:17.822 19:04:02 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:17.822 19:04:02 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVME=1
00:02:17.822 19:04:02 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_FTL=1
00:02:17.822 19:04:02 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_ISAL=1
00:02:17.822 19:04:02 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_RUN_ASAN=1
00:02:17.822 19:04:02 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1
00:02:17.822 19:04:02 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_TEST_XNVME=1
00:02:17.822 19:04:02 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_TEST_NVME_FDP=1
00:02:17.822 19:04:02 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:17.822 19:04:02 -- spdk_repo/autorun-spdk.conf@10 -- $ RUN_NIGHTLY=1
00:02:17.822 19:04:02 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:02:17.822 19:04:02 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:02:18.081 19:04:02 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:02:18.081 19:04:02 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:02:18.081 19:04:02 -- scripts/common.sh@15 -- $ shopt -s extglob
00:02:18.081 19:04:02 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:02:18.081 19:04:02 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:02:18.081 19:04:02 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:02:18.081 19:04:02 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:18.081 19:04:02 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:18.081 19:04:02 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:18.081 19:04:02 -- paths/export.sh@5 -- $ export PATH
00:02:18.081 19:04:02 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:18.081 19:04:02 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:02:18.081 19:04:02 -- common/autobuild_common.sh@493 -- $ date +%s
00:02:18.081 19:04:02 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1734375842.XXXXXX
00:02:18.081 19:04:02 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1734375842.Jhy0pV
00:02:18.081 19:04:02 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:02:18.081 19:04:02 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
00:02:18.081 19:04:02 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:02:18.081 19:04:02 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:02:18.081 19:04:02 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:02:18.081 19:04:02 -- common/autobuild_common.sh@509 -- $ get_config_params
00:02:18.081 19:04:02 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:02:18.081 19:04:02 -- common/autotest_common.sh@10 -- $ set +x
00:02:18.081 19:04:02 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme'
00:02:18.081 19:04:02 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:02:18.081 19:04:02 -- pm/common@17 -- $ local monitor
00:02:18.081 19:04:02 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:18.081 19:04:02 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:18.081 19:04:02 -- pm/common@25 -- $ sleep 1
00:02:18.081 19:04:02 -- pm/common@21 -- $ date +%s
00:02:18.081 19:04:02 -- pm/common@21 -- $ date +%s
00:02:18.081 19:04:02 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1734375842
00:02:18.081 19:04:02 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1734375842
00:02:18.081 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1734375842_collect-vmstat.pm.log
00:02:18.081 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1734375842_collect-cpu-load.pm.log
00:02:19.025 19:04:03 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:02:19.025 19:04:03 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:02:19.025 19:04:03 -- spdk/autobuild.sh@12 -- $ umask 022
00:02:19.025 19:04:03 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:02:19.025 19:04:03 -- spdk/autobuild.sh@16 -- $ date -u
00:02:19.025 Mon Dec 16 07:04:03 PM UTC 2024
00:02:19.025 19:04:03 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:02:19.025 v25.01-rc1-2-ge01cb43b8
00:02:19.025 19:04:03 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:02:19.025 19:04:03 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:02:19.026 19:04:03 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:02:19.026 19:04:03 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:02:19.026 19:04:03 -- common/autotest_common.sh@10 -- $ set +x
00:02:19.026 ************************************
00:02:19.026 START TEST asan
00:02:19.026 ************************************
00:02:19.026 using asan
00:02:19.026 19:04:03 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan'
00:02:19.026
00:02:19.026 real	0m0.000s
00:02:19.026 user	0m0.000s
00:02:19.026 sys	0m0.000s
00:02:19.026 19:04:03 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:02:19.026 ************************************
00:02:19.026 END TEST asan
00:02:19.026 ************************************
00:02:19.026 19:04:03 asan -- common/autotest_common.sh@10 -- $ set +x
00:02:19.026 19:04:03 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:02:19.026 19:04:03 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:02:19.026 19:04:03 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:02:19.026 19:04:03 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:02:19.026 19:04:03 -- common/autotest_common.sh@10 -- $ set +x
00:02:19.026 ************************************
00:02:19.026 START TEST ubsan
00:02:19.026 ************************************
00:02:19.026 using ubsan
00:02:19.026 19:04:03 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:02:19.026
00:02:19.026 real	0m0.000s
00:02:19.026 user	0m0.000s
00:02:19.026 sys	0m0.000s
00:02:19.026 ************************************
00:02:19.026 END TEST ubsan
00:02:19.026 19:04:03 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:02:19.026 19:04:03 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:02:19.026 ************************************
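The START/END banners and the real/user/sys lines above come from run_test, the harness helper that wraps a named test in banners and times it with bash's time builtin. A stripped-down illustration of the visible behavior; this is a hypothetical sketch, not the actual code in common/autotest_common.sh:

    # run_test <name> <command...>: banner, time the command, banner (sketch)
    run_test() {
        local name=$1; shift
        echo '************************************'
        echo "START TEST $name"
        echo '************************************'
        time "$@"
        echo '************************************'
        echo "END TEST $name"
        echo '************************************'
    }
    run_test ubsan echo 'using ubsan'   # produces blocks like those above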
00:02:19.286 19:04:03 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:02:19.286 19:04:03 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:02:19.286 19:04:03 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:02:19.286 19:04:03 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:02:19.286 19:04:03 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:02:19.286 19:04:03 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:02:19.286 19:04:03 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:02:19.286 19:04:03 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:02:19.286 19:04:03 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared
00:02:19.287 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:02:19.287 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:02:19.548 Using 'verbs' RDMA provider
00:02:32.768 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:02:42.775 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:02:42.775 Creating mk/config.mk...done.
00:02:42.775 Creating mk/cc.flags.mk...done.
00:02:42.775 Type 'make' to build.
00:02:42.775 19:04:26 -- spdk/autobuild.sh@70 -- $ run_test make make -j10
00:02:42.775 19:04:26 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:02:42.775 19:04:26 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:02:42.775 19:04:26 -- common/autotest_common.sh@10 -- $ set +x
00:02:42.775 ************************************
00:02:42.775 START TEST make
00:02:42.775 ************************************
00:02:42.775 19:04:26 make -- common/autotest_common.sh@1129 -- $ make -j10
00:02:42.775 (cd /home/vagrant/spdk_repo/spdk/xnvme && \
00:02:42.775 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \
00:02:42.775 meson setup builddir \
00:02:42.775 -Dwith-libaio=enabled \
00:02:42.775 -Dwith-liburing=enabled \
00:02:42.775 -Dwith-libvfn=disabled \
00:02:42.775 -Dwith-spdk=disabled \
00:02:42.775 -Dexamples=false \
00:02:42.775 -Dtests=false \
00:02:42.775 -Dtools=false && \
00:02:42.775 meson compile -C builddir && \
00:02:42.775 cd -)
00:02:44.687 The Meson build system
00:02:44.687 Version: 1.5.0
00:02:44.687 Source dir: /home/vagrant/spdk_repo/spdk/xnvme
00:02:44.687 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:02:44.687 Build type: native build
00:02:44.687 Project name: xnvme
00:02:44.687 Project version: 0.7.5
00:02:44.687 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:44.687 C linker for the host machine: cc ld.bfd 2.40-14
00:02:44.687 Host machine cpu family: x86_64
00:02:44.687 Host machine cpu: x86_64
00:02:44.687 Message: host_machine.system: linux
00:02:44.687 Compiler for C supports arguments -Wno-missing-braces: YES
00:02:44.687 Compiler for C supports arguments -Wno-cast-function-type: YES
00:02:44.687 Compiler for C supports arguments -Wno-strict-aliasing: YES
00:02:44.687 Run-time dependency threads found: YES
00:02:44.687 Has header "setupapi.h" : NO
00:02:44.687 Has header "linux/blkzoned.h" : YES
00:02:44.687 Has header "linux/blkzoned.h" : YES (cached)
00:02:44.687 Has header "libaio.h" : YES
00:02:44.687 Library aio found: YES
00:02:44.687 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:44.687 Run-time dependency liburing found: YES 2.2
00:02:44.687 Dependency libvfn skipped: feature with-libvfn disabled
00:02:44.687 Found CMake: /usr/bin/cmake (3.27.7)
00:02:44.687 Run-time dependency libisal found: NO (tried pkgconfig and cmake)
00:02:44.687 Subproject spdk : skipped: feature with-spdk disabled
00:02:44.687 Run-time dependency appleframeworks found: NO (tried framework)
00:02:44.687 Run-time dependency appleframeworks found: NO (tried framework)
00:02:44.687 Library rt found: YES
00:02:44.687 Checking for function "clock_gettime" with dependency -lrt: YES
00:02:44.687 Configuring xnvme_config.h using configuration
00:02:44.687 Configuring xnvme.spec using configuration
00:02:44.687 Run-time dependency bash-completion found: YES 2.11
00:02:44.687 Message: Bash-completions: /usr/share/bash-completion/completions
00:02:44.687 Program cp found: YES (/usr/bin/cp)
00:02:44.687 Build targets in project: 3
00:02:44.687
00:02:44.687 xnvme 0.7.5
00:02:44.687
00:02:44.687 Subprojects
00:02:44.687 spdk : NO Feature 'with-spdk' disabled
00:02:44.687
00:02:44.687 User defined options
00:02:44.687 examples : false
00:02:44.687 tests : false
00:02:44.687 tools : false
00:02:44.687 with-libaio : enabled
00:02:44.687 with-liburing: enabled
00:02:44.687 with-libvfn : disabled
00:02:44.687 with-spdk : disabled
00:02:44.687
00:02:44.687 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:44.946 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir'
00:02:44.946 [1/76] Generating toolbox/xnvme-driver-script with a custom command
00:02:44.946 [2/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd.c.o
00:02:44.946 [3/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_async.c.o
00:02:44.946 [4/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_nil.c.o
00:02:44.946 [5/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_mem_posix.c.o
00:02:44.946 [6/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_admin_shim.c.o
00:02:44.946 [7/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_dev.c.o
00:02:44.946 [8/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_adm.c.o
00:02:44.946 [9/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_emu.c.o
00:02:44.946 [10/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_sync_psync.c.o
00:02:44.946 [11/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_nvme.c.o
00:02:44.946 [12/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_posix.c.o
00:02:44.946 [13/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux.c.o
00:02:45.205 [14/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_admin.c.o
00:02:45.205 [15/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos.c.o
00:02:45.205 [16/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_hugepage.c.o
00:02:45.205 [17/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_thrpool.c.o
00:02:45.205 [18/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be.c.o
00:02:45.205 [19/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_sync.c.o
00:02:45.205 [20/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_dev.c.o
00:02:45.205 [21/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_libaio.c.o
00:02:45.205 [22/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_ucmd.c.o
00:02:45.205 [23/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_nosys.c.o
00:02:45.205 [24/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk.c.o
00:02:45.205 [25/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk.c.o
00:02:45.205 [26/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_admin.c.o
00:02:45.205 [27/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_nvme.c.o
00:02:45.205 [28/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_async.c.o
00:02:45.205 [29/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_dev.c.o
00:02:45.205 [30/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_dev.c.o
00:02:45.205 [31/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_block.c.o
00:02:45.205 [32/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_liburing.c.o
00:02:45.205 [33/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_sync.c.o
00:02:45.205 [34/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_mem.c.o
00:02:45.205 [35/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_admin.c.o
00:02:45.205 [36/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_dev.c.o
00:02:45.205 [37/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_sync.c.o
00:02:45.205 [38/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_async.c.o
00:02:45.205 [39/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_admin.c.o
00:02:45.205 [40/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio.c.o
00:02:45.205 [41/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows.c.o
00:02:45.205 [42/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_sync.c.o
00:02:45.205 [43/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_mem.c.o
00:02:45.205 [44/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_dev.c.o
00:02:45.205 [45/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_block.c.o
00:02:45.205 [46/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp_th.c.o
00:02:45.205 [47/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_dev.c.o
00:02:45.205 [48/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp.c.o
00:02:45.205 [49/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_ioring.c.o
00:02:45.464 [50/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_fs.c.o
00:02:45.464 [51/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_mem.c.o
00:02:45.464 [52/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_nvme.c.o
00:02:45.464 [53/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cmd.c.o
00:02:45.464 [54/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf_entries.c.o
00:02:45.464 [55/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_geo.c.o
00:02:45.464 [56/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_file.c.o
00:02:45.464 [57/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_req.c.o
00:02:45.464 [58/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf.c.o
00:02:45.464 [59/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_lba.c.o
00:02:45.464 [60/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ident.c.o
00:02:45.464 [61/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ver.c.o
00:02:45.464 [62/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_queue.c.o
00:02:45.464 [63/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_nvm.c.o
00:02:45.464 [64/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_topology.c.o
00:02:45.464 [65/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_opts.c.o
00:02:45.464 [66/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_kvs.c.o
00:02:45.464 [67/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_buf.c.o
00:02:45.464 [68/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec_pp.c.o
00:02:45.464 [69/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cli.c.o
00:02:45.464 [70/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_crc.c.o
00:02:45.464 [71/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_dev.c.o
00:02:45.722 [72/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_pi.c.o
00:02:45.981 [73/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_znd.c.o
00:02:45.981 [74/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec.c.o
00:02:45.981 [75/76] Linking static target lib/libxnvme.a
00:02:45.981 [76/76] Linking target lib/libxnvme.so.0.7.5
00:02:45.981 INFO: autodetecting backend as ninja
00:02:45.981 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:02:52.550 /home/vagrant/spdk_repo/spdk/xnvmebuild
00:02:52.550 The Meson build system
00:02:52.550 Version: 1.5.0
00:02:52.550 Source dir: /home/vagrant/spdk_repo/spdk/dpdk
00:02:52.550 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp
00:02:52.550 Build type: native build
00:02:52.550 Program cat found: YES (/usr/bin/cat)
00:02:52.550 Project name: DPDK
00:02:52.550 Project version: 24.03.0
00:02:52.550 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:52.550 C linker for the host machine: cc ld.bfd 2.40-14
00:02:52.550 Host machine cpu family: x86_64
00:02:52.550 Host machine cpu: x86_64
00:02:52.550 Message: ## Building in Developer Mode ##
00:02:52.550 Program pkg-config found: YES (/usr/bin/pkg-config)
00:02:52.550 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh)
00:02:52.550 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:02:52.550 Program python3 found: YES (/usr/bin/python3)
00:02:52.550 Program cat found: YES (/usr/bin/cat)
00:02:52.550 Compiler for C supports arguments -march=native: YES
00:02:52.550 Checking for size of "void *" : 8
00:02:52.550 Checking for size of "void *" : 8 (cached)
00:02:52.550 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:02:52.550 Library m found: YES
00:02:52.550 Library numa found: YES
00:02:52.550 Has header "numaif.h" : YES
00:02:52.550 Library fdt found: NO
00:02:52.550 Library execinfo found: NO
00:02:52.550 Has header "execinfo.h" : YES
00:02:52.550 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:52.550 Run-time dependency libarchive found: NO (tried pkgconfig)
00:02:52.550 Run-time dependency libbsd found: NO (tried pkgconfig)
00:02:52.550 Run-time dependency jansson found: NO (tried pkgconfig)
00:02:52.550 Run-time dependency openssl found: YES 3.1.1
00:02:52.550 Run-time dependency libpcap found: YES 1.10.4
00:02:52.550 Has header "pcap.h" with dependency libpcap: YES
00:02:52.550 Compiler for C supports arguments -Wcast-qual: YES
00:02:52.550 Compiler for C supports arguments -Wdeprecated: YES
00:02:52.550 Compiler for C supports arguments -Wformat: YES
00:02:52.550 Compiler for C supports arguments -Wformat-nonliteral: NO
00:02:52.550 Compiler for C supports arguments -Wformat-security: NO
00:02:52.550 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:52.550 Compiler for C supports arguments -Wmissing-prototypes: YES
00:02:52.550 Compiler for C supports arguments -Wnested-externs: YES
00:02:52.550 Compiler for C supports arguments -Wold-style-definition: YES
00:02:52.550 Compiler for C supports arguments -Wpointer-arith: YES
00:02:52.550 Compiler for C supports arguments -Wsign-compare: YES
00:02:52.550 Compiler for C supports arguments -Wstrict-prototypes: YES
00:02:52.550 Compiler for C supports arguments -Wundef: YES
00:02:52.550 Compiler for C supports arguments -Wwrite-strings: YES
00:02:52.550 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:02:52.550 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:02:52.550 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:52.550 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:02:52.550 Program objdump found: YES (/usr/bin/objdump)
00:02:52.550 Compiler for C supports arguments -mavx512f: YES
00:02:52.550 Checking if "AVX512 checking" compiles: YES
00:02:52.550 Fetching value of define "__SSE4_2__" : 1
00:02:52.550 Fetching value of define "__AES__" : 1
00:02:52.550 Fetching value of define "__AVX__" : 1
00:02:52.550 Fetching value of define "__AVX2__" : 1
00:02:52.550 Fetching value of define "__AVX512BW__" : 1
00:02:52.550 Fetching value of define "__AVX512CD__" : 1
00:02:52.550 Fetching value of define "__AVX512DQ__" : 1
00:02:52.550 Fetching value of define "__AVX512F__" : 1
00:02:52.550 Fetching value of define "__AVX512VL__" : 1
00:02:52.550 Fetching value of define "__PCLMUL__" : 1
00:02:52.550 Fetching value of define "__RDRND__" : 1
00:02:52.550 Fetching value of define "__RDSEED__" : 1
00:02:52.550 Fetching value of define "__VPCLMULQDQ__" : 1
00:02:52.550 Fetching value of define "__znver1__" : (undefined)
00:02:52.550 Fetching value of define "__znver2__" : (undefined)
00:02:52.550 Fetching value of define "__znver3__" : (undefined)
00:02:52.550 Fetching value of define "__znver4__" : (undefined)
00:02:52.550 Library asan found: YES
00:02:52.550 Compiler for C supports arguments -Wno-format-truncation: YES
00:02:52.550 Message: lib/log: Defining dependency "log"
00:02:52.550 Message: lib/kvargs: Defining dependency "kvargs"
00:02:52.550 Message: lib/telemetry: Defining dependency "telemetry"
00:02:52.550 Library rt found: YES
00:02:52.550 Checking for function "getentropy" : NO
00:02:52.550 Message: lib/eal: Defining dependency "eal"
00:02:52.550 Message: lib/ring: Defining dependency "ring"
00:02:52.550 Message: lib/rcu: Defining dependency "rcu"
00:02:52.550 Message: lib/mempool: Defining dependency "mempool"
00:02:52.550 Message: lib/mbuf: Defining dependency "mbuf"
00:02:52.550 Fetching value of define "__PCLMUL__" : 1 (cached)
00:02:52.550 Fetching value of define "__AVX512F__" : 1 (cached)
00:02:52.550 Fetching value of define "__AVX512BW__" : 1 (cached)
00:02:52.550 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:02:52.550 Fetching value of define "__AVX512VL__" : 1 (cached)
00:02:52.550 Fetching value of define "__VPCLMULQDQ__" : 1 (cached)
00:02:52.550 Compiler for C supports arguments -mpclmul: YES
00:02:52.550 Compiler for C supports arguments -maes: YES
00:02:52.550 Compiler for C supports arguments -mavx512f: YES (cached)
00:02:52.550 Compiler for C supports arguments -mavx512bw: YES
00:02:52.550 Compiler for C supports arguments -mavx512dq: YES
00:02:52.550 Compiler for C supports arguments -mavx512vl: YES
00:02:52.550 Compiler for C supports arguments -mvpclmulqdq: YES
00:02:52.550 Compiler for C supports arguments -mavx2: YES
00:02:52.550 Compiler for C supports arguments -mavx: YES
00:02:52.550 Message: lib/net: Defining dependency "net"
00:02:52.550 Message: lib/meter: Defining dependency "meter"
00:02:52.550 Message: lib/ethdev: Defining dependency "ethdev"
00:02:52.550 Message: lib/pci: Defining dependency "pci"
00:02:52.550 Message: lib/cmdline: Defining dependency "cmdline"
00:02:52.550 Message: lib/hash: Defining dependency "hash"
00:02:52.550 Message: lib/timer: Defining dependency "timer"
00:02:52.550 Message: lib/compressdev: Defining dependency "compressdev"
00:02:52.550 Message: lib/cryptodev: Defining dependency "cryptodev"
00:02:52.550 Message: lib/dmadev: Defining dependency "dmadev"
00:02:52.550 Compiler for C supports arguments -Wno-cast-qual: YES
00:02:52.550 Message: lib/power: Defining dependency "power"
00:02:52.550 Message: lib/reorder: Defining dependency "reorder"
00:02:52.550 Message: lib/security: Defining dependency "security"
00:02:52.550 Has header "linux/userfaultfd.h" : YES
00:02:52.550 Has header "linux/vduse.h" : YES
00:02:52.550 Message: lib/vhost: Defining dependency "vhost"
00:02:52.550 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:02:52.550 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:02:52.551 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:02:52.551 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:02:52.551 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:02:52.551 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:02:52.551 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:02:52.551 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:02:52.551 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:02:52.551 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:02:52.551 Program doxygen found: YES (/usr/local/bin/doxygen)
00:02:52.551 Configuring doxy-api-html.conf using configuration
00:02:52.551 Configuring doxy-api-man.conf using configuration
00:02:52.551 Program mandb found: YES (/usr/bin/mandb)
00:02:52.551 Program sphinx-build found: NO
00:02:52.551 Configuring rte_build_config.h using configuration
00:02:52.551 Message:
00:02:52.551 =================
00:02:52.551 Applications Enabled
00:02:52.551 =================
00:02:52.551
00:02:52.551 apps:
00:02:52.551
00:02:52.551
00:02:52.551 Message:
00:02:52.551 =================
00:02:52.551 Libraries Enabled
00:02:52.551 =================
00:02:52.551
00:02:52.551 libs:
00:02:52.551 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:02:52.551 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:02:52.551 cryptodev, dmadev, power, reorder, security, vhost,
00:02:52.551
00:02:52.551 Message:
00:02:52.551 ===============
00:02:52.551 Drivers Enabled
00:02:52.551 ===============
00:02:52.551
00:02:52.551 common:
00:02:52.551
00:02:52.551 bus:
00:02:52.551 pci, vdev,
00:02:52.551 mempool:
00:02:52.551 ring,
00:02:52.551 dma:
00:02:52.551
00:02:52.551 net:
00:02:52.551
00:02:52.551 crypto:
00:02:52.551
00:02:52.551 compress:
00:02:52.551
00:02:52.551 vdpa:
00:02:52.551
00:02:52.551
00:02:52.551 Message:
00:02:52.551 =================
00:02:52.551 Content Skipped
00:02:52.551 =================
00:02:52.551
00:02:52.551 apps:
00:02:52.551 dumpcap: explicitly disabled via build config
00:02:52.551 graph: explicitly disabled via build config
00:02:52.551 pdump: explicitly disabled via build config
00:02:52.551 proc-info: explicitly disabled via build config
00:02:52.551 test-acl: explicitly disabled via build config
00:02:52.551 test-bbdev: explicitly disabled via build config
00:02:52.551 test-cmdline: explicitly disabled via build config
00:02:52.551 test-compress-perf: explicitly disabled via build config
00:02:52.551 test-crypto-perf: explicitly disabled via build config
00:02:52.551 test-dma-perf: explicitly disabled via build config
test-eventdev: explicitly disabled via build config 00:02:52.551 test-fib: explicitly disabled via build config 00:02:52.551 test-flow-perf: explicitly disabled via build config 00:02:52.551 test-gpudev: explicitly disabled via build config 00:02:52.551 test-mldev: explicitly disabled via build config 00:02:52.551 test-pipeline: explicitly disabled via build config 00:02:52.551 test-pmd: explicitly disabled via build config 00:02:52.551 test-regex: explicitly disabled via build config 00:02:52.551 test-sad: explicitly disabled via build config 00:02:52.551 test-security-perf: explicitly disabled via build config 00:02:52.551 00:02:52.551 libs: 00:02:52.551 argparse: explicitly disabled via build config 00:02:52.551 metrics: explicitly disabled via build config 00:02:52.551 acl: explicitly disabled via build config 00:02:52.551 bbdev: explicitly disabled via build config 00:02:52.551 bitratestats: explicitly disabled via build config 00:02:52.551 bpf: explicitly disabled via build config 00:02:52.551 cfgfile: explicitly disabled via build config 00:02:52.551 distributor: explicitly disabled via build config 00:02:52.551 efd: explicitly disabled via build config 00:02:52.551 eventdev: explicitly disabled via build config 00:02:52.551 dispatcher: explicitly disabled via build config 00:02:52.551 gpudev: explicitly disabled via build config 00:02:52.551 gro: explicitly disabled via build config 00:02:52.551 gso: explicitly disabled via build config 00:02:52.551 ip_frag: explicitly disabled via build config 00:02:52.551 jobstats: explicitly disabled via build config 00:02:52.551 latencystats: explicitly disabled via build config 00:02:52.551 lpm: explicitly disabled via build config 00:02:52.551 member: explicitly disabled via build config 00:02:52.551 pcapng: explicitly disabled via build config 00:02:52.551 rawdev: explicitly disabled via build config 00:02:52.551 regexdev: explicitly disabled via build config 00:02:52.551 mldev: explicitly disabled via build config 00:02:52.551 rib: explicitly disabled via build config 00:02:52.551 sched: explicitly disabled via build config 00:02:52.551 stack: explicitly disabled via build config 00:02:52.551 ipsec: explicitly disabled via build config 00:02:52.551 pdcp: explicitly disabled via build config 00:02:52.551 fib: explicitly disabled via build config 00:02:52.551 port: explicitly disabled via build config 00:02:52.551 pdump: explicitly disabled via build config 00:02:52.551 table: explicitly disabled via build config 00:02:52.551 pipeline: explicitly disabled via build config 00:02:52.551 graph: explicitly disabled via build config 00:02:52.551 node: explicitly disabled via build config 00:02:52.551 00:02:52.551 drivers: 00:02:52.551 common/cpt: not in enabled drivers build config 00:02:52.551 common/dpaax: not in enabled drivers build config 00:02:52.551 common/iavf: not in enabled drivers build config 00:02:52.551 common/idpf: not in enabled drivers build config 00:02:52.551 common/ionic: not in enabled drivers build config 00:02:52.551 common/mvep: not in enabled drivers build config 00:02:52.551 common/octeontx: not in enabled drivers build config 00:02:52.551 bus/auxiliary: not in enabled drivers build config 00:02:52.551 bus/cdx: not in enabled drivers build config 00:02:52.551 bus/dpaa: not in enabled drivers build config 00:02:52.551 bus/fslmc: not in enabled drivers build config 00:02:52.551 bus/ifpga: not in enabled drivers build config 00:02:52.551 bus/platform: not in enabled drivers build config 00:02:52.551 bus/uacce: not in enabled 
drivers build config 00:02:52.551 bus/vmbus: not in enabled drivers build config 00:02:52.551 common/cnxk: not in enabled drivers build config 00:02:52.551 common/mlx5: not in enabled drivers build config 00:02:52.551 common/nfp: not in enabled drivers build config 00:02:52.551 common/nitrox: not in enabled drivers build config 00:02:52.551 common/qat: not in enabled drivers build config 00:02:52.551 common/sfc_efx: not in enabled drivers build config 00:02:52.551 mempool/bucket: not in enabled drivers build config 00:02:52.551 mempool/cnxk: not in enabled drivers build config 00:02:52.551 mempool/dpaa: not in enabled drivers build config 00:02:52.551 mempool/dpaa2: not in enabled drivers build config 00:02:52.551 mempool/octeontx: not in enabled drivers build config 00:02:52.551 mempool/stack: not in enabled drivers build config 00:02:52.551 dma/cnxk: not in enabled drivers build config 00:02:52.551 dma/dpaa: not in enabled drivers build config 00:02:52.551 dma/dpaa2: not in enabled drivers build config 00:02:52.551 dma/hisilicon: not in enabled drivers build config 00:02:52.551 dma/idxd: not in enabled drivers build config 00:02:52.551 dma/ioat: not in enabled drivers build config 00:02:52.551 dma/skeleton: not in enabled drivers build config 00:02:52.551 net/af_packet: not in enabled drivers build config 00:02:52.551 net/af_xdp: not in enabled drivers build config 00:02:52.551 net/ark: not in enabled drivers build config 00:02:52.551 net/atlantic: not in enabled drivers build config 00:02:52.551 net/avp: not in enabled drivers build config 00:02:52.551 net/axgbe: not in enabled drivers build config 00:02:52.551 net/bnx2x: not in enabled drivers build config 00:02:52.551 net/bnxt: not in enabled drivers build config 00:02:52.551 net/bonding: not in enabled drivers build config 00:02:52.551 net/cnxk: not in enabled drivers build config 00:02:52.551 net/cpfl: not in enabled drivers build config 00:02:52.551 net/cxgbe: not in enabled drivers build config 00:02:52.551 net/dpaa: not in enabled drivers build config 00:02:52.551 net/dpaa2: not in enabled drivers build config 00:02:52.551 net/e1000: not in enabled drivers build config 00:02:52.551 net/ena: not in enabled drivers build config 00:02:52.551 net/enetc: not in enabled drivers build config 00:02:52.551 net/enetfec: not in enabled drivers build config 00:02:52.551 net/enic: not in enabled drivers build config 00:02:52.551 net/failsafe: not in enabled drivers build config 00:02:52.551 net/fm10k: not in enabled drivers build config 00:02:52.551 net/gve: not in enabled drivers build config 00:02:52.551 net/hinic: not in enabled drivers build config 00:02:52.551 net/hns3: not in enabled drivers build config 00:02:52.551 net/i40e: not in enabled drivers build config 00:02:52.551 net/iavf: not in enabled drivers build config 00:02:52.551 net/ice: not in enabled drivers build config 00:02:52.551 net/idpf: not in enabled drivers build config 00:02:52.551 net/igc: not in enabled drivers build config 00:02:52.551 net/ionic: not in enabled drivers build config 00:02:52.551 net/ipn3ke: not in enabled drivers build config 00:02:52.551 net/ixgbe: not in enabled drivers build config 00:02:52.551 net/mana: not in enabled drivers build config 00:02:52.551 net/memif: not in enabled drivers build config 00:02:52.551 net/mlx4: not in enabled drivers build config 00:02:52.551 net/mlx5: not in enabled drivers build config 00:02:52.551 net/mvneta: not in enabled drivers build config 00:02:52.551 net/mvpp2: not in enabled drivers build config 00:02:52.551 
net/netvsc: not in enabled drivers build config 00:02:52.551 net/nfb: not in enabled drivers build config 00:02:52.551 net/nfp: not in enabled drivers build config 00:02:52.551 net/ngbe: not in enabled drivers build config 00:02:52.551 net/null: not in enabled drivers build config 00:02:52.551 net/octeontx: not in enabled drivers build config 00:02:52.551 net/octeon_ep: not in enabled drivers build config 00:02:52.551 net/pcap: not in enabled drivers build config 00:02:52.551 net/pfe: not in enabled drivers build config 00:02:52.551 net/qede: not in enabled drivers build config 00:02:52.551 net/ring: not in enabled drivers build config 00:02:52.551 net/sfc: not in enabled drivers build config 00:02:52.551 net/softnic: not in enabled drivers build config 00:02:52.551 net/tap: not in enabled drivers build config 00:02:52.551 net/thunderx: not in enabled drivers build config 00:02:52.551 net/txgbe: not in enabled drivers build config 00:02:52.551 net/vdev_netvsc: not in enabled drivers build config 00:02:52.551 net/vhost: not in enabled drivers build config 00:02:52.551 net/virtio: not in enabled drivers build config 00:02:52.551 net/vmxnet3: not in enabled drivers build config 00:02:52.552 raw/*: missing internal dependency, "rawdev" 00:02:52.552 crypto/armv8: not in enabled drivers build config 00:02:52.552 crypto/bcmfs: not in enabled drivers build config 00:02:52.552 crypto/caam_jr: not in enabled drivers build config 00:02:52.552 crypto/ccp: not in enabled drivers build config 00:02:52.552 crypto/cnxk: not in enabled drivers build config 00:02:52.552 crypto/dpaa_sec: not in enabled drivers build config 00:02:52.552 crypto/dpaa2_sec: not in enabled drivers build config 00:02:52.552 crypto/ipsec_mb: not in enabled drivers build config 00:02:52.552 crypto/mlx5: not in enabled drivers build config 00:02:52.552 crypto/mvsam: not in enabled drivers build config 00:02:52.552 crypto/nitrox: not in enabled drivers build config 00:02:52.552 crypto/null: not in enabled drivers build config 00:02:52.552 crypto/octeontx: not in enabled drivers build config 00:02:52.552 crypto/openssl: not in enabled drivers build config 00:02:52.552 crypto/scheduler: not in enabled drivers build config 00:02:52.552 crypto/uadk: not in enabled drivers build config 00:02:52.552 crypto/virtio: not in enabled drivers build config 00:02:52.552 compress/isal: not in enabled drivers build config 00:02:52.552 compress/mlx5: not in enabled drivers build config 00:02:52.552 compress/nitrox: not in enabled drivers build config 00:02:52.552 compress/octeontx: not in enabled drivers build config 00:02:52.552 compress/zlib: not in enabled drivers build config 00:02:52.552 regex/*: missing internal dependency, "regexdev" 00:02:52.552 ml/*: missing internal dependency, "mldev" 00:02:52.552 vdpa/ifc: not in enabled drivers build config 00:02:52.552 vdpa/mlx5: not in enabled drivers build config 00:02:52.552 vdpa/nfp: not in enabled drivers build config 00:02:52.552 vdpa/sfc: not in enabled drivers build config 00:02:52.552 event/*: missing internal dependency, "eventdev" 00:02:52.552 baseband/*: missing internal dependency, "bbdev" 00:02:52.552 gpu/*: missing internal dependency, "gpudev" 00:02:52.552 00:02:52.552 00:02:52.552 Build targets in project: 84 00:02:52.552 00:02:52.552 DPDK 24.03.0 00:02:52.552 00:02:52.552 User defined options 00:02:52.552 buildtype : debug 00:02:52.552 default_library : shared 00:02:52.552 libdir : lib 00:02:52.552 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:52.552 b_sanitize : address 
00:02:52.552 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:52.552 c_link_args : 00:02:52.552 cpu_instruction_set: native 00:02:52.552 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:52.552 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:52.552 enable_docs : false 00:02:52.552 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:02:52.552 enable_kmods : false 00:02:52.552 max_lcores : 128 00:02:52.552 tests : false 00:02:52.552 00:02:52.552 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:53.119 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:53.119 [1/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:53.119 [2/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:53.119 [3/267] Linking static target lib/librte_kvargs.a 00:02:53.119 [4/267] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:53.119 [5/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:53.119 [6/267] Linking static target lib/librte_log.a 00:02:53.377 [7/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:53.377 [8/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:53.635 [9/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:53.635 [10/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:53.635 [11/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:53.635 [12/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:53.635 [13/267] Linking static target lib/librte_telemetry.a 00:02:53.635 [14/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:53.635 [15/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:53.635 [16/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.635 [17/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:53.635 [18/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:53.894 [19/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:53.894 [20/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:53.894 [21/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:53.894 [22/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.894 [23/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:54.151 [24/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:54.151 [25/267] Linking target lib/librte_log.so.24.1 00:02:54.151 [26/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:54.152 [27/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 
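(The "User defined options" block above is enough to reproduce this DPDK configuration by hand. In this run the meson call is issued by SPDK's build scripts, but a roughly equivalent direct invocation, as a sketch with the paths, option lists, and parallelism copied verbatim from the summary and ninja lines in this log, would be:)

  # Configure DPDK the way this run did: debug build, shared libraries, ASan,
  # test apps and most libraries disabled, only bus/mempool/power drivers kept.
  meson setup build-tmp \
    --buildtype=debug \
    --default-library=shared \
    --prefix=/home/vagrant/spdk_repo/spdk/dpdk/build \
    --libdir=lib \
    -Db_sanitize=address \
    -Dc_args='-Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror' \
    -Dcpu_instruction_set=native \
    -Ddisable_apps=dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test \
    -Ddisable_libs=acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table \
    -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm \
    -Denable_docs=false \
    -Denable_kmods=false \
    -Dmax_lcores=128 \
    -Dtests=false
  # Then build with the backend command and parallelism the log itself reports:
  ninja -C build-tmp -j 10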
00:02:54.152 [28/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:54.152 [29/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:54.152 [30/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:54.152 [31/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.152 [32/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:54.410 [33/267] Linking target lib/librte_kvargs.so.24.1 00:02:54.410 [34/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:54.410 [35/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:54.410 [36/267] Linking target lib/librte_telemetry.so.24.1 00:02:54.410 [37/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:54.410 [38/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:54.410 [39/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:54.410 [40/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:54.668 [41/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:54.668 [42/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:54.669 [43/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:54.669 [44/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:54.669 [45/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:54.669 [46/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:54.669 [47/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:54.669 [48/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:54.927 [49/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:54.927 [50/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:54.927 [51/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:54.927 [52/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:55.186 [53/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:55.186 [54/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:55.186 [55/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:55.186 [56/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:55.186 [57/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:55.186 [58/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:55.186 [59/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:55.186 [60/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:55.186 [61/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:55.186 [62/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:55.444 [63/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:55.444 [64/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:55.444 [65/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:55.444 [66/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:55.702 [67/267] Compiling 
C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:55.702 [68/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:55.702 [69/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:55.702 [70/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:55.702 [71/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:55.702 [72/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:55.702 [73/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:55.702 [74/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:55.702 [75/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:55.961 [76/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:55.961 [77/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:55.961 [78/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:55.961 [79/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:55.961 [80/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:55.961 [81/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:55.961 [82/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:56.219 [83/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:56.219 [84/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:56.219 [85/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:56.219 [86/267] Linking static target lib/librte_ring.a 00:02:56.219 [87/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:56.219 [88/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:56.478 [89/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:56.478 [90/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:56.478 [91/267] Linking static target lib/librte_eal.a 00:02:56.478 [92/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:56.478 [93/267] Linking static target lib/librte_mempool.a 00:02:56.478 [94/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:56.478 [95/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:56.478 [96/267] Linking static target lib/librte_rcu.a 00:02:56.737 [97/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:56.737 [98/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.737 [99/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:56.737 [100/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:56.737 [101/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:56.737 [102/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:56.737 [103/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:56.995 [104/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.995 [105/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:56.995 [106/267] Linking static target lib/librte_meter.a 00:02:57.256 [107/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:57.256 [108/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:57.256 [109/267] Compiling C object 
lib/librte_net.a.p/net_net_crc_avx512.c.o 00:02:57.256 [110/267] Linking static target lib/librte_net.a 00:02:57.256 [111/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:57.256 [112/267] Linking static target lib/librte_mbuf.a 00:02:57.256 [113/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.256 [114/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:57.256 [115/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:57.514 [116/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.514 [117/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:57.514 [118/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:57.514 [119/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.772 [120/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:57.772 [121/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:57.772 [122/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:58.031 [123/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:58.031 [124/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:58.031 [125/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:58.290 [126/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:58.290 [127/267] Linking static target lib/librte_pci.a 00:02:58.290 [128/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:58.290 [129/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.290 [130/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:58.290 [131/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:58.290 [132/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:58.290 [133/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:58.290 [134/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:58.290 [135/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:58.290 [136/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:58.548 [137/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:58.548 [138/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:58.548 [139/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:58.548 [140/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:58.548 [141/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:58.548 [142/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.548 [143/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:58.548 [144/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:58.548 [145/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:58.548 [146/267] Linking static target lib/librte_cmdline.a 00:02:58.806 [147/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:58.806 [148/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:58.806 [149/267] 
Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:58.806 [150/267] Linking static target lib/librte_timer.a 00:02:58.806 [151/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:59.064 [152/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:59.064 [153/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:59.065 [154/267] Linking static target lib/librte_ethdev.a 00:02:59.065 [155/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:59.065 [156/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:59.323 [157/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:59.323 [158/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:59.323 [159/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:59.323 [160/267] Linking static target lib/librte_compressdev.a 00:02:59.323 [161/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.323 [162/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:59.323 [163/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:59.581 [164/267] Linking static target lib/librte_hash.a 00:02:59.581 [165/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:59.581 [166/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:59.581 [167/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:59.581 [168/267] Linking static target lib/librte_dmadev.a 00:02:59.581 [169/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:59.839 [170/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:59.839 [171/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:59.839 [172/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:59.839 [173/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.098 [174/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.098 [175/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:00.098 [176/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:00.098 [177/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:00.098 [178/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:00.098 [179/267] Linking static target lib/librte_cryptodev.a 00:03:00.356 [180/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:00.356 [181/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:00.356 [182/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.357 [183/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.615 [184/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:00.615 [185/267] Linking static target lib/librte_power.a 00:03:00.615 [186/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:00.615 [187/267] Linking static target lib/librte_reorder.a 00:03:00.615 [188/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:00.615 [189/267] Compiling C object 
lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:00.615 [190/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:00.615 [191/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:00.875 [192/267] Linking static target lib/librte_security.a 00:03:01.137 [193/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.137 [194/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:01.395 [195/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.395 [196/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:01.395 [197/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:01.395 [198/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:01.395 [199/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.654 [200/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:01.654 [201/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:01.654 [202/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:01.654 [203/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:01.912 [204/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:01.912 [205/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:01.912 [206/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:01.912 [207/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:01.912 [208/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:01.912 [209/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:02.171 [210/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.171 [211/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:02.171 [212/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:02.171 [213/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:02.171 [214/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:02.171 [215/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:02.171 [216/267] Linking static target drivers/librte_bus_vdev.a 00:03:02.171 [217/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:02.171 [218/267] Linking static target drivers/librte_bus_pci.a 00:03:02.429 [219/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:02.429 [220/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:02.429 [221/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.429 [222/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:02.429 [223/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:02.429 [224/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:02.429 [225/267] Linking static target drivers/librte_mempool_ring.a 00:03:02.687 [226/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by 
meson to capture output) 00:03:03.254 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:04.189 [228/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.189 [229/267] Linking target lib/librte_eal.so.24.1 00:03:04.189 [230/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:03:04.448 [231/267] Linking target lib/librte_ring.so.24.1 00:03:04.448 [232/267] Linking target lib/librte_pci.so.24.1 00:03:04.448 [233/267] Linking target lib/librte_meter.so.24.1 00:03:04.448 [234/267] Linking target drivers/librte_bus_vdev.so.24.1 00:03:04.448 [235/267] Linking target lib/librte_dmadev.so.24.1 00:03:04.448 [236/267] Linking target lib/librte_timer.so.24.1 00:03:04.448 [237/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:03:04.448 [238/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:03:04.448 [239/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:03:04.448 [240/267] Linking target lib/librte_mempool.so.24.1 00:03:04.448 [241/267] Linking target lib/librte_rcu.so.24.1 00:03:04.448 [242/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:03:04.448 [243/267] Linking target drivers/librte_bus_pci.so.24.1 00:03:04.448 [244/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:03:04.448 [245/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:03:04.706 [246/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:03:04.706 [247/267] Linking target lib/librte_mbuf.so.24.1 00:03:04.706 [248/267] Linking target drivers/librte_mempool_ring.so.24.1 00:03:04.706 [249/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:03:04.706 [250/267] Linking target lib/librte_net.so.24.1 00:03:04.706 [251/267] Linking target lib/librte_reorder.so.24.1 00:03:04.706 [252/267] Linking target lib/librte_compressdev.so.24.1 00:03:04.706 [253/267] Linking target lib/librte_cryptodev.so.24.1 00:03:04.965 [254/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:03:04.965 [255/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:03:04.965 [256/267] Linking target lib/librte_cmdline.so.24.1 00:03:04.965 [257/267] Linking target lib/librte_hash.so.24.1 00:03:04.965 [258/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.965 [259/267] Linking target lib/librte_security.so.24.1 00:03:04.965 [260/267] Linking target lib/librte_ethdev.so.24.1 00:03:04.965 [261/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:03:04.965 [262/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:03:05.223 [263/267] Linking target lib/librte_power.so.24.1 00:03:06.644 [264/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:06.644 [265/267] Linking static target lib/librte_vhost.a 00:03:07.578 [266/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.578 [267/267] Linking target lib/librte_vhost.so.24.1 00:03:07.837 INFO: autodetecting backend as ninja 00:03:07.837 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:03:22.713 CC lib/ut/ut.o 00:03:22.713 CC 
lib/log/log.o 00:03:22.713 CC lib/log/log_flags.o 00:03:22.713 CC lib/log/log_deprecated.o 00:03:22.713 CC lib/ut_mock/mock.o 00:03:22.713 LIB libspdk_ut.a 00:03:22.713 LIB libspdk_log.a 00:03:22.713 LIB libspdk_ut_mock.a 00:03:22.713 SO libspdk_ut.so.2.0 00:03:22.713 SO libspdk_ut_mock.so.6.0 00:03:22.713 SO libspdk_log.so.7.1 00:03:22.713 SYMLINK libspdk_ut.so 00:03:22.713 SYMLINK libspdk_ut_mock.so 00:03:22.713 SYMLINK libspdk_log.so 00:03:22.713 CC lib/util/base64.o 00:03:22.713 CC lib/dma/dma.o 00:03:22.713 CC lib/util/bit_array.o 00:03:22.713 CC lib/util/crc16.o 00:03:22.713 CC lib/util/cpuset.o 00:03:22.713 CC lib/util/crc32c.o 00:03:22.713 CC lib/util/crc32.o 00:03:22.713 CXX lib/trace_parser/trace.o 00:03:22.713 CC lib/ioat/ioat.o 00:03:22.713 CC lib/util/crc32_ieee.o 00:03:22.713 CC lib/vfio_user/host/vfio_user_pci.o 00:03:22.713 CC lib/util/crc64.o 00:03:22.713 CC lib/util/dif.o 00:03:22.713 CC lib/util/fd.o 00:03:22.713 LIB libspdk_dma.a 00:03:22.713 CC lib/util/fd_group.o 00:03:22.713 CC lib/util/file.o 00:03:22.713 SO libspdk_dma.so.5.0 00:03:22.713 CC lib/util/hexlify.o 00:03:22.713 CC lib/util/iov.o 00:03:22.713 SYMLINK libspdk_dma.so 00:03:22.713 CC lib/util/math.o 00:03:22.713 LIB libspdk_ioat.a 00:03:22.713 CC lib/util/net.o 00:03:22.713 CC lib/util/pipe.o 00:03:22.713 SO libspdk_ioat.so.7.0 00:03:22.713 SYMLINK libspdk_ioat.so 00:03:22.713 CC lib/vfio_user/host/vfio_user.o 00:03:22.713 CC lib/util/strerror_tls.o 00:03:22.713 CC lib/util/string.o 00:03:22.713 CC lib/util/uuid.o 00:03:22.713 CC lib/util/xor.o 00:03:22.713 CC lib/util/zipf.o 00:03:22.713 CC lib/util/md5.o 00:03:22.713 LIB libspdk_vfio_user.a 00:03:22.713 SO libspdk_vfio_user.so.5.0 00:03:22.713 SYMLINK libspdk_vfio_user.so 00:03:22.713 LIB libspdk_util.a 00:03:22.713 LIB libspdk_trace_parser.a 00:03:22.713 SO libspdk_util.so.10.1 00:03:22.713 SO libspdk_trace_parser.so.6.0 00:03:22.713 SYMLINK libspdk_trace_parser.so 00:03:22.713 SYMLINK libspdk_util.so 00:03:22.713 CC lib/vmd/led.o 00:03:22.713 CC lib/vmd/vmd.o 00:03:22.713 CC lib/env_dpdk/env.o 00:03:22.713 CC lib/idxd/idxd.o 00:03:22.713 CC lib/idxd/idxd_user.o 00:03:22.713 CC lib/env_dpdk/memory.o 00:03:22.713 CC lib/env_dpdk/pci.o 00:03:22.713 CC lib/rdma_utils/rdma_utils.o 00:03:22.714 CC lib/json/json_parse.o 00:03:22.714 CC lib/conf/conf.o 00:03:22.971 CC lib/env_dpdk/init.o 00:03:22.971 CC lib/json/json_util.o 00:03:22.972 LIB libspdk_conf.a 00:03:22.972 CC lib/json/json_write.o 00:03:22.972 SO libspdk_conf.so.6.0 00:03:22.972 LIB libspdk_rdma_utils.a 00:03:22.972 SO libspdk_rdma_utils.so.1.0 00:03:23.230 SYMLINK libspdk_conf.so 00:03:23.230 CC lib/idxd/idxd_kernel.o 00:03:23.230 SYMLINK libspdk_rdma_utils.so 00:03:23.230 CC lib/env_dpdk/threads.o 00:03:23.230 CC lib/env_dpdk/pci_ioat.o 00:03:23.230 CC lib/env_dpdk/pci_virtio.o 00:03:23.230 CC lib/env_dpdk/pci_vmd.o 00:03:23.230 CC lib/env_dpdk/pci_idxd.o 00:03:23.230 CC lib/env_dpdk/pci_event.o 00:03:23.230 LIB libspdk_idxd.a 00:03:23.230 LIB libspdk_json.a 00:03:23.230 CC lib/env_dpdk/sigbus_handler.o 00:03:23.230 SO libspdk_idxd.so.12.1 00:03:23.230 SO libspdk_json.so.6.0 00:03:23.230 CC lib/env_dpdk/pci_dpdk.o 00:03:23.488 SYMLINK libspdk_idxd.so 00:03:23.488 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:23.488 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:23.488 SYMLINK libspdk_json.so 00:03:23.488 CC lib/rdma_provider/common.o 00:03:23.488 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:23.488 LIB libspdk_vmd.a 00:03:23.488 SO libspdk_vmd.so.6.0 00:03:23.488 SYMLINK libspdk_vmd.so 00:03:23.488 CC 
lib/jsonrpc/jsonrpc_server_tcp.o 00:03:23.488 CC lib/jsonrpc/jsonrpc_server.o 00:03:23.488 CC lib/jsonrpc/jsonrpc_client.o 00:03:23.488 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:23.746 LIB libspdk_rdma_provider.a 00:03:23.746 SO libspdk_rdma_provider.so.7.0 00:03:23.746 SYMLINK libspdk_rdma_provider.so 00:03:23.746 LIB libspdk_jsonrpc.a 00:03:23.746 SO libspdk_jsonrpc.so.6.0 00:03:24.005 SYMLINK libspdk_jsonrpc.so 00:03:24.282 CC lib/rpc/rpc.o 00:03:24.282 LIB libspdk_env_dpdk.a 00:03:24.282 SO libspdk_env_dpdk.so.15.1 00:03:24.282 SYMLINK libspdk_env_dpdk.so 00:03:24.282 LIB libspdk_rpc.a 00:03:24.549 SO libspdk_rpc.so.6.0 00:03:24.549 SYMLINK libspdk_rpc.so 00:03:24.549 CC lib/trace/trace.o 00:03:24.549 CC lib/trace/trace_flags.o 00:03:24.549 CC lib/trace/trace_rpc.o 00:03:24.549 CC lib/notify/notify_rpc.o 00:03:24.549 CC lib/notify/notify.o 00:03:24.808 CC lib/keyring/keyring.o 00:03:24.808 CC lib/keyring/keyring_rpc.o 00:03:24.808 LIB libspdk_notify.a 00:03:24.808 SO libspdk_notify.so.6.0 00:03:24.808 SYMLINK libspdk_notify.so 00:03:24.808 LIB libspdk_trace.a 00:03:24.808 LIB libspdk_keyring.a 00:03:24.808 SO libspdk_trace.so.11.0 00:03:24.808 SO libspdk_keyring.so.2.0 00:03:25.066 SYMLINK libspdk_keyring.so 00:03:25.066 SYMLINK libspdk_trace.so 00:03:25.324 CC lib/sock/sock_rpc.o 00:03:25.324 CC lib/sock/sock.o 00:03:25.324 CC lib/thread/thread.o 00:03:25.324 CC lib/thread/iobuf.o 00:03:25.582 LIB libspdk_sock.a 00:03:25.582 SO libspdk_sock.so.10.0 00:03:25.582 SYMLINK libspdk_sock.so 00:03:26.149 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:26.149 CC lib/nvme/nvme_ctrlr.o 00:03:26.149 CC lib/nvme/nvme_fabric.o 00:03:26.149 CC lib/nvme/nvme_ns_cmd.o 00:03:26.150 CC lib/nvme/nvme_pcie_common.o 00:03:26.150 CC lib/nvme/nvme_ns.o 00:03:26.150 CC lib/nvme/nvme_pcie.o 00:03:26.150 CC lib/nvme/nvme.o 00:03:26.150 CC lib/nvme/nvme_qpair.o 00:03:26.715 CC lib/nvme/nvme_quirks.o 00:03:26.716 CC lib/nvme/nvme_transport.o 00:03:26.716 CC lib/nvme/nvme_discovery.o 00:03:26.716 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:26.716 LIB libspdk_thread.a 00:03:26.716 SO libspdk_thread.so.11.0 00:03:26.716 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:26.716 CC lib/nvme/nvme_tcp.o 00:03:26.973 SYMLINK libspdk_thread.so 00:03:26.973 CC lib/nvme/nvme_opal.o 00:03:26.973 CC lib/nvme/nvme_io_msg.o 00:03:26.973 CC lib/nvme/nvme_poll_group.o 00:03:26.973 CC lib/nvme/nvme_zns.o 00:03:27.231 CC lib/nvme/nvme_stubs.o 00:03:27.231 CC lib/nvme/nvme_auth.o 00:03:27.231 CC lib/nvme/nvme_cuse.o 00:03:27.231 CC lib/nvme/nvme_rdma.o 00:03:27.489 CC lib/accel/accel.o 00:03:27.489 CC lib/blob/blobstore.o 00:03:27.489 CC lib/blob/request.o 00:03:27.747 CC lib/blob/zeroes.o 00:03:27.747 CC lib/blob/blob_bs_dev.o 00:03:27.747 CC lib/accel/accel_rpc.o 00:03:28.005 CC lib/accel/accel_sw.o 00:03:28.005 CC lib/init/json_config.o 00:03:28.005 CC lib/init/subsystem.o 00:03:28.005 CC lib/virtio/virtio.o 00:03:28.005 CC lib/fsdev/fsdev.o 00:03:28.005 CC lib/virtio/virtio_vhost_user.o 00:03:28.262 CC lib/init/subsystem_rpc.o 00:03:28.262 CC lib/fsdev/fsdev_io.o 00:03:28.263 CC lib/fsdev/fsdev_rpc.o 00:03:28.263 CC lib/init/rpc.o 00:03:28.263 CC lib/virtio/virtio_vfio_user.o 00:03:28.263 CC lib/virtio/virtio_pci.o 00:03:28.521 LIB libspdk_init.a 00:03:28.521 SO libspdk_init.so.6.0 00:03:28.521 SYMLINK libspdk_init.so 00:03:28.521 LIB libspdk_virtio.a 00:03:28.781 LIB libspdk_accel.a 00:03:28.781 SO libspdk_virtio.so.7.0 00:03:28.781 SO libspdk_accel.so.16.0 00:03:28.781 CC lib/event/app.o 00:03:28.781 CC lib/event/reactor.o 00:03:28.781 SYMLINK 
libspdk_virtio.so 00:03:28.781 CC lib/event/app_rpc.o 00:03:28.781 CC lib/event/log_rpc.o 00:03:28.781 CC lib/event/scheduler_static.o 00:03:28.781 LIB libspdk_fsdev.a 00:03:28.781 SYMLINK libspdk_accel.so 00:03:28.781 SO libspdk_fsdev.so.2.0 00:03:28.781 LIB libspdk_nvme.a 00:03:28.781 SYMLINK libspdk_fsdev.so 00:03:29.042 CC lib/bdev/bdev.o 00:03:29.042 CC lib/bdev/bdev_rpc.o 00:03:29.042 CC lib/bdev/bdev_zone.o 00:03:29.042 CC lib/bdev/part.o 00:03:29.042 CC lib/bdev/scsi_nvme.o 00:03:29.042 SO libspdk_nvme.so.15.0 00:03:29.042 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:29.302 LIB libspdk_event.a 00:03:29.302 SYMLINK libspdk_nvme.so 00:03:29.302 SO libspdk_event.so.14.0 00:03:29.302 SYMLINK libspdk_event.so 00:03:29.870 LIB libspdk_fuse_dispatcher.a 00:03:29.870 SO libspdk_fuse_dispatcher.so.1.0 00:03:29.870 SYMLINK libspdk_fuse_dispatcher.so 00:03:30.805 LIB libspdk_blob.a 00:03:31.064 SO libspdk_blob.so.12.0 00:03:31.064 SYMLINK libspdk_blob.so 00:03:31.323 CC lib/blobfs/blobfs.o 00:03:31.323 CC lib/blobfs/tree.o 00:03:31.323 CC lib/lvol/lvol.o 00:03:31.888 LIB libspdk_bdev.a 00:03:31.888 SO libspdk_bdev.so.17.0 00:03:32.146 SYMLINK libspdk_bdev.so 00:03:32.146 LIB libspdk_blobfs.a 00:03:32.146 SO libspdk_blobfs.so.11.0 00:03:32.146 CC lib/ftl/ftl_core.o 00:03:32.146 CC lib/ftl/ftl_init.o 00:03:32.146 CC lib/ublk/ublk.o 00:03:32.146 CC lib/ftl/ftl_layout.o 00:03:32.146 CC lib/ublk/ublk_rpc.o 00:03:32.146 CC lib/scsi/dev.o 00:03:32.146 CC lib/nbd/nbd.o 00:03:32.146 CC lib/nvmf/ctrlr.o 00:03:32.146 SYMLINK libspdk_blobfs.so 00:03:32.146 CC lib/nvmf/ctrlr_discovery.o 00:03:32.146 LIB libspdk_lvol.a 00:03:32.404 SO libspdk_lvol.so.11.0 00:03:32.404 CC lib/scsi/lun.o 00:03:32.404 SYMLINK libspdk_lvol.so 00:03:32.404 CC lib/ftl/ftl_debug.o 00:03:32.404 CC lib/ftl/ftl_io.o 00:03:32.404 CC lib/ftl/ftl_sb.o 00:03:32.404 CC lib/ftl/ftl_l2p.o 00:03:32.663 CC lib/scsi/port.o 00:03:32.663 CC lib/ftl/ftl_l2p_flat.o 00:03:32.663 CC lib/ftl/ftl_nv_cache.o 00:03:32.663 CC lib/ftl/ftl_band.o 00:03:32.663 CC lib/nbd/nbd_rpc.o 00:03:32.663 CC lib/ftl/ftl_band_ops.o 00:03:32.663 CC lib/ftl/ftl_writer.o 00:03:32.663 CC lib/scsi/scsi.o 00:03:32.663 CC lib/nvmf/ctrlr_bdev.o 00:03:32.663 CC lib/nvmf/subsystem.o 00:03:32.663 LIB libspdk_nbd.a 00:03:32.921 SO libspdk_nbd.so.7.0 00:03:32.921 LIB libspdk_ublk.a 00:03:32.921 CC lib/scsi/scsi_bdev.o 00:03:32.921 SO libspdk_ublk.so.3.0 00:03:32.921 CC lib/scsi/scsi_pr.o 00:03:32.921 SYMLINK libspdk_nbd.so 00:03:32.921 CC lib/scsi/scsi_rpc.o 00:03:32.921 SYMLINK libspdk_ublk.so 00:03:32.921 CC lib/scsi/task.o 00:03:32.921 CC lib/ftl/ftl_rq.o 00:03:32.921 CC lib/ftl/ftl_reloc.o 00:03:32.921 CC lib/ftl/ftl_l2p_cache.o 00:03:33.190 CC lib/nvmf/nvmf.o 00:03:33.190 CC lib/ftl/ftl_p2l.o 00:03:33.190 CC lib/ftl/ftl_p2l_log.o 00:03:33.190 CC lib/nvmf/nvmf_rpc.o 00:03:33.447 LIB libspdk_scsi.a 00:03:33.447 CC lib/ftl/mngt/ftl_mngt.o 00:03:33.447 SO libspdk_scsi.so.9.0 00:03:33.447 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:33.447 SYMLINK libspdk_scsi.so 00:03:33.447 CC lib/nvmf/transport.o 00:03:33.447 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:33.705 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:33.705 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:33.705 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:33.705 CC lib/iscsi/conn.o 00:03:33.705 CC lib/vhost/vhost.o 00:03:33.705 CC lib/vhost/vhost_rpc.o 00:03:33.963 CC lib/vhost/vhost_scsi.o 00:03:33.963 CC lib/vhost/vhost_blk.o 00:03:33.963 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:33.963 CC lib/vhost/rte_vhost_user.o 00:03:33.963 CC lib/nvmf/tcp.o 
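(The prefixes in this stretch of the log are SPDK's quiet-build markers: CC for compiling an object, LIB for archiving a static library, SO and SYMLINK for producing a versioned shared object and its unversioned symlink, and, further down, LINK for executables. When skimming a saved copy of such a log it can help to filter for one component; a minimal sketch, with the log filename hypothetical:)

  # Pull every build step that touched the FTL component out of a saved log:
  # CC lines carry the lib/ftl/ source path, LIB/SO/SYMLINK lines carry libspdk_ftl.
  grep -E ' (CC|LIB|SO|SYMLINK) lib/ftl/| libspdk_ftl\.' nvme-vg-autotest.log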
00:03:33.963 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:34.221 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:34.221 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:34.221 CC lib/iscsi/init_grp.o 00:03:34.480 CC lib/iscsi/iscsi.o 00:03:34.480 CC lib/iscsi/param.o 00:03:34.480 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:34.480 CC lib/iscsi/portal_grp.o 00:03:34.480 CC lib/iscsi/tgt_node.o 00:03:34.480 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:34.739 CC lib/iscsi/iscsi_subsystem.o 00:03:34.739 CC lib/iscsi/iscsi_rpc.o 00:03:34.739 CC lib/nvmf/stubs.o 00:03:34.739 CC lib/iscsi/task.o 00:03:34.739 CC lib/nvmf/mdns_server.o 00:03:34.739 CC lib/nvmf/rdma.o 00:03:34.997 LIB libspdk_vhost.a 00:03:34.997 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:34.997 CC lib/ftl/utils/ftl_conf.o 00:03:34.997 CC lib/ftl/utils/ftl_md.o 00:03:34.997 SO libspdk_vhost.so.8.0 00:03:34.997 CC lib/nvmf/auth.o 00:03:34.997 CC lib/ftl/utils/ftl_mempool.o 00:03:34.997 SYMLINK libspdk_vhost.so 00:03:34.997 CC lib/ftl/utils/ftl_bitmap.o 00:03:34.997 CC lib/ftl/utils/ftl_property.o 00:03:34.997 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:34.997 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:35.255 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:35.255 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:35.255 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:35.255 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:35.256 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:35.256 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:35.256 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:35.256 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:35.256 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:35.514 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:35.514 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:35.514 CC lib/ftl/base/ftl_base_dev.o 00:03:35.514 CC lib/ftl/base/ftl_base_bdev.o 00:03:35.514 CC lib/ftl/ftl_trace.o 00:03:35.514 LIB libspdk_iscsi.a 00:03:35.772 SO libspdk_iscsi.so.8.0 00:03:35.772 LIB libspdk_ftl.a 00:03:35.772 SYMLINK libspdk_iscsi.so 00:03:35.772 SO libspdk_ftl.so.9.0 00:03:36.030 SYMLINK libspdk_ftl.so 00:03:36.966 LIB libspdk_nvmf.a 00:03:36.966 SO libspdk_nvmf.so.20.0 00:03:37.224 SYMLINK libspdk_nvmf.so 00:03:37.483 CC module/env_dpdk/env_dpdk_rpc.o 00:03:37.483 CC module/accel/error/accel_error.o 00:03:37.483 CC module/accel/ioat/accel_ioat.o 00:03:37.483 CC module/sock/posix/posix.o 00:03:37.483 CC module/scheduler/gscheduler/gscheduler.o 00:03:37.483 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:37.483 CC module/fsdev/aio/fsdev_aio.o 00:03:37.483 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:37.483 CC module/keyring/file/keyring.o 00:03:37.483 CC module/blob/bdev/blob_bdev.o 00:03:37.483 LIB libspdk_env_dpdk_rpc.a 00:03:37.741 SO libspdk_env_dpdk_rpc.so.6.0 00:03:37.741 SYMLINK libspdk_env_dpdk_rpc.so 00:03:37.741 CC module/keyring/file/keyring_rpc.o 00:03:37.741 CC module/accel/error/accel_error_rpc.o 00:03:37.741 CC module/accel/ioat/accel_ioat_rpc.o 00:03:37.741 LIB libspdk_scheduler_gscheduler.a 00:03:37.741 LIB libspdk_scheduler_dpdk_governor.a 00:03:37.741 SO libspdk_scheduler_gscheduler.so.4.0 00:03:37.741 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:37.741 LIB libspdk_scheduler_dynamic.a 00:03:37.741 LIB libspdk_keyring_file.a 00:03:37.741 SO libspdk_scheduler_dynamic.so.4.0 00:03:37.741 SO libspdk_keyring_file.so.2.0 00:03:37.741 SYMLINK libspdk_scheduler_gscheduler.so 00:03:37.741 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:37.741 LIB libspdk_accel_error.a 00:03:37.741 LIB libspdk_accel_ioat.a 00:03:37.741 SYMLINK libspdk_keyring_file.so 00:03:37.741 CC 
module/fsdev/aio/fsdev_aio_rpc.o 00:03:37.741 SO libspdk_accel_error.so.2.0 00:03:37.741 SO libspdk_accel_ioat.so.6.0 00:03:37.741 LIB libspdk_blob_bdev.a 00:03:38.000 SYMLINK libspdk_scheduler_dynamic.so 00:03:38.000 SO libspdk_blob_bdev.so.12.0 00:03:38.000 CC module/fsdev/aio/linux_aio_mgr.o 00:03:38.000 SYMLINK libspdk_accel_error.so 00:03:38.000 SYMLINK libspdk_accel_ioat.so 00:03:38.000 SYMLINK libspdk_blob_bdev.so 00:03:38.000 CC module/keyring/linux/keyring.o 00:03:38.000 CC module/keyring/linux/keyring_rpc.o 00:03:38.000 CC module/accel/iaa/accel_iaa.o 00:03:38.000 CC module/accel/iaa/accel_iaa_rpc.o 00:03:38.000 CC module/accel/dsa/accel_dsa.o 00:03:38.000 CC module/accel/dsa/accel_dsa_rpc.o 00:03:38.000 LIB libspdk_keyring_linux.a 00:03:38.000 SO libspdk_keyring_linux.so.1.0 00:03:38.258 LIB libspdk_fsdev_aio.a 00:03:38.258 LIB libspdk_accel_iaa.a 00:03:38.258 SO libspdk_fsdev_aio.so.1.0 00:03:38.258 SYMLINK libspdk_keyring_linux.so 00:03:38.258 SO libspdk_accel_iaa.so.3.0 00:03:38.258 CC module/blobfs/bdev/blobfs_bdev.o 00:03:38.258 LIB libspdk_sock_posix.a 00:03:38.258 CC module/bdev/delay/vbdev_delay.o 00:03:38.258 SO libspdk_sock_posix.so.6.0 00:03:38.258 CC module/bdev/error/vbdev_error.o 00:03:38.259 SYMLINK libspdk_accel_iaa.so 00:03:38.259 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:38.259 SYMLINK libspdk_fsdev_aio.so 00:03:38.259 CC module/bdev/gpt/gpt.o 00:03:38.259 CC module/bdev/error/vbdev_error_rpc.o 00:03:38.259 LIB libspdk_accel_dsa.a 00:03:38.259 SYMLINK libspdk_sock_posix.so 00:03:38.259 CC module/bdev/gpt/vbdev_gpt.o 00:03:38.259 SO libspdk_accel_dsa.so.5.0 00:03:38.259 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:38.259 CC module/bdev/malloc/bdev_malloc.o 00:03:38.259 SYMLINK libspdk_accel_dsa.so 00:03:38.259 CC module/bdev/lvol/vbdev_lvol.o 00:03:38.259 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:38.518 LIB libspdk_bdev_error.a 00:03:38.518 LIB libspdk_blobfs_bdev.a 00:03:38.518 LIB libspdk_bdev_delay.a 00:03:38.518 SO libspdk_bdev_error.so.6.0 00:03:38.518 SO libspdk_blobfs_bdev.so.6.0 00:03:38.518 SO libspdk_bdev_delay.so.6.0 00:03:38.518 CC module/bdev/null/bdev_null.o 00:03:38.518 LIB libspdk_bdev_gpt.a 00:03:38.518 CC module/bdev/nvme/bdev_nvme.o 00:03:38.518 SYMLINK libspdk_blobfs_bdev.so 00:03:38.518 SYMLINK libspdk_bdev_error.so 00:03:38.518 SO libspdk_bdev_gpt.so.6.0 00:03:38.518 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:38.518 CC module/bdev/nvme/nvme_rpc.o 00:03:38.518 SYMLINK libspdk_bdev_delay.so 00:03:38.518 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:38.518 CC module/bdev/passthru/vbdev_passthru.o 00:03:38.518 CC module/bdev/raid/bdev_raid.o 00:03:38.777 SYMLINK libspdk_bdev_gpt.so 00:03:38.777 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:38.777 LIB libspdk_bdev_malloc.a 00:03:38.777 SO libspdk_bdev_malloc.so.6.0 00:03:38.777 CC module/bdev/null/bdev_null_rpc.o 00:03:38.777 CC module/bdev/nvme/bdev_mdns_client.o 00:03:38.777 CC module/bdev/nvme/vbdev_opal.o 00:03:38.777 SYMLINK libspdk_bdev_malloc.so 00:03:38.777 CC module/bdev/raid/bdev_raid_rpc.o 00:03:39.035 LIB libspdk_bdev_passthru.a 00:03:39.035 SO libspdk_bdev_passthru.so.6.0 00:03:39.035 LIB libspdk_bdev_null.a 00:03:39.035 LIB libspdk_bdev_lvol.a 00:03:39.035 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:39.035 SO libspdk_bdev_null.so.6.0 00:03:39.035 SYMLINK libspdk_bdev_passthru.so 00:03:39.035 CC module/bdev/raid/bdev_raid_sb.o 00:03:39.035 SO libspdk_bdev_lvol.so.6.0 00:03:39.035 CC module/bdev/split/vbdev_split.o 00:03:39.035 SYMLINK libspdk_bdev_null.so 00:03:39.035 
SYMLINK libspdk_bdev_lvol.so 00:03:39.035 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:39.035 CC module/bdev/split/vbdev_split_rpc.o 00:03:39.035 CC module/bdev/raid/raid0.o 00:03:39.294 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:39.294 CC module/bdev/raid/raid1.o 00:03:39.294 CC module/bdev/xnvme/bdev_xnvme.o 00:03:39.294 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:39.294 LIB libspdk_bdev_split.a 00:03:39.294 SO libspdk_bdev_split.so.6.0 00:03:39.294 CC module/bdev/aio/bdev_aio.o 00:03:39.294 SYMLINK libspdk_bdev_split.so 00:03:39.294 CC module/bdev/aio/bdev_aio_rpc.o 00:03:39.294 CC module/bdev/raid/concat.o 00:03:39.294 CC module/bdev/ftl/bdev_ftl.o 00:03:39.294 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:03:39.553 LIB libspdk_bdev_zone_block.a 00:03:39.553 SO libspdk_bdev_zone_block.so.6.0 00:03:39.553 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:39.553 LIB libspdk_bdev_xnvme.a 00:03:39.553 SYMLINK libspdk_bdev_zone_block.so 00:03:39.553 SO libspdk_bdev_xnvme.so.3.0 00:03:39.553 SYMLINK libspdk_bdev_xnvme.so 00:03:39.553 CC module/bdev/iscsi/bdev_iscsi.o 00:03:39.553 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:39.553 LIB libspdk_bdev_aio.a 00:03:39.553 LIB libspdk_bdev_raid.a 00:03:39.553 SO libspdk_bdev_aio.so.6.0 00:03:39.811 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:39.811 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:39.811 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:39.811 LIB libspdk_bdev_ftl.a 00:03:39.811 SYMLINK libspdk_bdev_aio.so 00:03:39.811 SO libspdk_bdev_ftl.so.6.0 00:03:39.811 SO libspdk_bdev_raid.so.6.0 00:03:39.811 SYMLINK libspdk_bdev_ftl.so 00:03:39.811 SYMLINK libspdk_bdev_raid.so 00:03:40.070 LIB libspdk_bdev_iscsi.a 00:03:40.070 SO libspdk_bdev_iscsi.so.6.0 00:03:40.070 SYMLINK libspdk_bdev_iscsi.so 00:03:40.328 LIB libspdk_bdev_virtio.a 00:03:40.328 SO libspdk_bdev_virtio.so.6.0 00:03:40.328 SYMLINK libspdk_bdev_virtio.so 00:03:40.898 LIB libspdk_bdev_nvme.a 00:03:40.898 SO libspdk_bdev_nvme.so.7.1 00:03:40.898 SYMLINK libspdk_bdev_nvme.so 00:03:41.480 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:41.480 CC module/event/subsystems/scheduler/scheduler.o 00:03:41.480 CC module/event/subsystems/iobuf/iobuf.o 00:03:41.480 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:41.480 CC module/event/subsystems/vmd/vmd.o 00:03:41.480 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:41.480 CC module/event/subsystems/keyring/keyring.o 00:03:41.480 CC module/event/subsystems/fsdev/fsdev.o 00:03:41.480 CC module/event/subsystems/sock/sock.o 00:03:41.480 LIB libspdk_event_keyring.a 00:03:41.480 LIB libspdk_event_scheduler.a 00:03:41.480 LIB libspdk_event_vmd.a 00:03:41.480 LIB libspdk_event_fsdev.a 00:03:41.480 LIB libspdk_event_vhost_blk.a 00:03:41.480 LIB libspdk_event_iobuf.a 00:03:41.480 SO libspdk_event_keyring.so.1.0 00:03:41.480 SO libspdk_event_scheduler.so.4.0 00:03:41.480 SO libspdk_event_vhost_blk.so.3.0 00:03:41.480 SO libspdk_event_vmd.so.6.0 00:03:41.480 LIB libspdk_event_sock.a 00:03:41.480 SO libspdk_event_iobuf.so.3.0 00:03:41.480 SO libspdk_event_fsdev.so.1.0 00:03:41.480 SO libspdk_event_sock.so.5.0 00:03:41.480 SYMLINK libspdk_event_keyring.so 00:03:41.480 SYMLINK libspdk_event_scheduler.so 00:03:41.480 SYMLINK libspdk_event_vmd.so 00:03:41.480 SYMLINK libspdk_event_vhost_blk.so 00:03:41.480 SYMLINK libspdk_event_fsdev.so 00:03:41.480 SYMLINK libspdk_event_iobuf.so 00:03:41.480 SYMLINK libspdk_event_sock.so 00:03:41.739 CC module/event/subsystems/accel/accel.o 00:03:41.997 LIB libspdk_event_accel.a 00:03:41.997 SO 
libspdk_event_accel.so.6.0 00:03:41.997 SYMLINK libspdk_event_accel.so 00:03:42.255 CC module/event/subsystems/bdev/bdev.o 00:03:42.513 LIB libspdk_event_bdev.a 00:03:42.513 SO libspdk_event_bdev.so.6.0 00:03:42.513 SYMLINK libspdk_event_bdev.so 00:03:42.772 CC module/event/subsystems/scsi/scsi.o 00:03:42.772 CC module/event/subsystems/nbd/nbd.o 00:03:42.772 CC module/event/subsystems/ublk/ublk.o 00:03:42.772 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:42.772 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:42.772 LIB libspdk_event_nbd.a 00:03:42.772 LIB libspdk_event_ublk.a 00:03:42.772 LIB libspdk_event_scsi.a 00:03:42.772 SO libspdk_event_nbd.so.6.0 00:03:42.772 SO libspdk_event_scsi.so.6.0 00:03:42.772 SO libspdk_event_ublk.so.3.0 00:03:43.030 SYMLINK libspdk_event_nbd.so 00:03:43.030 SYMLINK libspdk_event_ublk.so 00:03:43.030 SYMLINK libspdk_event_scsi.so 00:03:43.030 LIB libspdk_event_nvmf.a 00:03:43.030 SO libspdk_event_nvmf.so.6.0 00:03:43.030 SYMLINK libspdk_event_nvmf.so 00:03:43.289 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:43.289 CC module/event/subsystems/iscsi/iscsi.o 00:03:43.289 LIB libspdk_event_iscsi.a 00:03:43.289 LIB libspdk_event_vhost_scsi.a 00:03:43.289 SO libspdk_event_vhost_scsi.so.3.0 00:03:43.289 SO libspdk_event_iscsi.so.6.0 00:03:43.289 SYMLINK libspdk_event_vhost_scsi.so 00:03:43.289 SYMLINK libspdk_event_iscsi.so 00:03:43.547 SO libspdk.so.6.0 00:03:43.547 SYMLINK libspdk.so 00:03:43.806 CC app/trace_record/trace_record.o 00:03:43.806 CXX app/trace/trace.o 00:03:43.806 CC app/spdk_lspci/spdk_lspci.o 00:03:43.806 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:43.806 CC app/iscsi_tgt/iscsi_tgt.o 00:03:43.806 CC app/nvmf_tgt/nvmf_main.o 00:03:43.806 CC app/spdk_tgt/spdk_tgt.o 00:03:43.806 CC examples/ioat/perf/perf.o 00:03:43.806 CC examples/util/zipf/zipf.o 00:03:43.806 CC test/thread/poller_perf/poller_perf.o 00:03:43.806 LINK spdk_lspci 00:03:43.806 LINK interrupt_tgt 00:03:44.066 LINK iscsi_tgt 00:03:44.066 LINK nvmf_tgt 00:03:44.066 LINK poller_perf 00:03:44.066 LINK zipf 00:03:44.066 LINK spdk_trace_record 00:03:44.066 LINK spdk_tgt 00:03:44.066 LINK ioat_perf 00:03:44.066 LINK spdk_trace 00:03:44.066 CC app/spdk_nvme_perf/perf.o 00:03:44.066 CC app/spdk_nvme_identify/identify.o 00:03:44.066 CC app/spdk_nvme_discover/discovery_aer.o 00:03:44.066 CC examples/ioat/verify/verify.o 00:03:44.326 CC app/spdk_top/spdk_top.o 00:03:44.326 CC app/spdk_dd/spdk_dd.o 00:03:44.326 CC test/dma/test_dma/test_dma.o 00:03:44.326 CC app/fio/nvme/fio_plugin.o 00:03:44.326 CC test/app/bdev_svc/bdev_svc.o 00:03:44.326 LINK spdk_nvme_discover 00:03:44.326 LINK verify 00:03:44.326 CC examples/thread/thread/thread_ex.o 00:03:44.586 LINK bdev_svc 00:03:44.586 LINK spdk_dd 00:03:44.586 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:44.586 LINK thread 00:03:44.586 CC examples/sock/hello_world/hello_sock.o 00:03:44.847 LINK test_dma 00:03:44.847 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:44.847 LINK spdk_nvme_perf 00:03:44.847 LINK hello_sock 00:03:44.847 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:44.847 LINK spdk_nvme 00:03:44.847 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:44.847 LINK spdk_top 00:03:44.847 LINK spdk_nvme_identify 00:03:44.847 CC examples/vmd/lsvmd/lsvmd.o 00:03:45.105 CC examples/vmd/led/led.o 00:03:45.105 LINK nvme_fuzz 00:03:45.105 CC app/fio/bdev/fio_plugin.o 00:03:45.105 CC test/app/histogram_perf/histogram_perf.o 00:03:45.105 LINK lsvmd 00:03:45.105 CC test/app/jsoncat/jsoncat.o 00:03:45.105 LINK led 00:03:45.105 CC 
examples/idxd/perf/perf.o 00:03:45.105 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:45.105 LINK histogram_perf 00:03:45.106 LINK jsoncat 00:03:45.364 CC test/app/stub/stub.o 00:03:45.364 LINK vhost_fuzz 00:03:45.364 TEST_HEADER include/spdk/accel.h 00:03:45.364 TEST_HEADER include/spdk/accel_module.h 00:03:45.364 TEST_HEADER include/spdk/assert.h 00:03:45.364 TEST_HEADER include/spdk/barrier.h 00:03:45.364 TEST_HEADER include/spdk/base64.h 00:03:45.364 TEST_HEADER include/spdk/bdev.h 00:03:45.364 TEST_HEADER include/spdk/bdev_module.h 00:03:45.364 TEST_HEADER include/spdk/bdev_zone.h 00:03:45.364 TEST_HEADER include/spdk/bit_array.h 00:03:45.364 TEST_HEADER include/spdk/bit_pool.h 00:03:45.364 TEST_HEADER include/spdk/blob_bdev.h 00:03:45.364 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:45.364 TEST_HEADER include/spdk/blobfs.h 00:03:45.364 TEST_HEADER include/spdk/blob.h 00:03:45.364 TEST_HEADER include/spdk/conf.h 00:03:45.364 TEST_HEADER include/spdk/config.h 00:03:45.364 TEST_HEADER include/spdk/cpuset.h 00:03:45.364 TEST_HEADER include/spdk/crc16.h 00:03:45.364 TEST_HEADER include/spdk/crc32.h 00:03:45.364 TEST_HEADER include/spdk/crc64.h 00:03:45.364 TEST_HEADER include/spdk/dif.h 00:03:45.364 TEST_HEADER include/spdk/dma.h 00:03:45.364 TEST_HEADER include/spdk/endian.h 00:03:45.364 TEST_HEADER include/spdk/env_dpdk.h 00:03:45.364 TEST_HEADER include/spdk/env.h 00:03:45.364 TEST_HEADER include/spdk/event.h 00:03:45.364 TEST_HEADER include/spdk/fd_group.h 00:03:45.364 TEST_HEADER include/spdk/fd.h 00:03:45.364 TEST_HEADER include/spdk/file.h 00:03:45.364 TEST_HEADER include/spdk/fsdev.h 00:03:45.364 TEST_HEADER include/spdk/fsdev_module.h 00:03:45.364 TEST_HEADER include/spdk/ftl.h 00:03:45.364 TEST_HEADER include/spdk/gpt_spec.h 00:03:45.364 TEST_HEADER include/spdk/hexlify.h 00:03:45.364 TEST_HEADER include/spdk/histogram_data.h 00:03:45.364 TEST_HEADER include/spdk/idxd.h 00:03:45.364 TEST_HEADER include/spdk/idxd_spec.h 00:03:45.364 TEST_HEADER include/spdk/init.h 00:03:45.364 TEST_HEADER include/spdk/ioat.h 00:03:45.364 TEST_HEADER include/spdk/ioat_spec.h 00:03:45.364 TEST_HEADER include/spdk/iscsi_spec.h 00:03:45.364 TEST_HEADER include/spdk/json.h 00:03:45.364 TEST_HEADER include/spdk/jsonrpc.h 00:03:45.364 TEST_HEADER include/spdk/keyring.h 00:03:45.364 TEST_HEADER include/spdk/keyring_module.h 00:03:45.364 TEST_HEADER include/spdk/likely.h 00:03:45.364 TEST_HEADER include/spdk/log.h 00:03:45.364 TEST_HEADER include/spdk/lvol.h 00:03:45.364 TEST_HEADER include/spdk/md5.h 00:03:45.364 TEST_HEADER include/spdk/memory.h 00:03:45.364 TEST_HEADER include/spdk/mmio.h 00:03:45.364 TEST_HEADER include/spdk/nbd.h 00:03:45.364 TEST_HEADER include/spdk/net.h 00:03:45.364 TEST_HEADER include/spdk/notify.h 00:03:45.364 TEST_HEADER include/spdk/nvme.h 00:03:45.364 TEST_HEADER include/spdk/nvme_intel.h 00:03:45.364 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:45.364 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:45.364 TEST_HEADER include/spdk/nvme_spec.h 00:03:45.364 TEST_HEADER include/spdk/nvme_zns.h 00:03:45.364 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:45.364 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:45.364 TEST_HEADER include/spdk/nvmf.h 00:03:45.364 TEST_HEADER include/spdk/nvmf_spec.h 00:03:45.364 TEST_HEADER include/spdk/nvmf_transport.h 00:03:45.364 TEST_HEADER include/spdk/opal.h 00:03:45.364 LINK stub 00:03:45.364 TEST_HEADER include/spdk/opal_spec.h 00:03:45.364 TEST_HEADER include/spdk/pci_ids.h 00:03:45.364 TEST_HEADER include/spdk/pipe.h 00:03:45.364 TEST_HEADER 
include/spdk/queue.h 00:03:45.364 CC test/env/mem_callbacks/mem_callbacks.o 00:03:45.364 TEST_HEADER include/spdk/reduce.h 00:03:45.364 TEST_HEADER include/spdk/rpc.h 00:03:45.364 TEST_HEADER include/spdk/scheduler.h 00:03:45.364 TEST_HEADER include/spdk/scsi.h 00:03:45.364 TEST_HEADER include/spdk/scsi_spec.h 00:03:45.364 TEST_HEADER include/spdk/sock.h 00:03:45.364 TEST_HEADER include/spdk/stdinc.h 00:03:45.364 TEST_HEADER include/spdk/string.h 00:03:45.364 TEST_HEADER include/spdk/thread.h 00:03:45.364 TEST_HEADER include/spdk/trace.h 00:03:45.364 TEST_HEADER include/spdk/trace_parser.h 00:03:45.364 TEST_HEADER include/spdk/tree.h 00:03:45.364 TEST_HEADER include/spdk/ublk.h 00:03:45.364 TEST_HEADER include/spdk/util.h 00:03:45.364 CC test/event/event_perf/event_perf.o 00:03:45.364 TEST_HEADER include/spdk/uuid.h 00:03:45.364 TEST_HEADER include/spdk/version.h 00:03:45.364 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:45.364 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:45.364 TEST_HEADER include/spdk/vhost.h 00:03:45.364 TEST_HEADER include/spdk/vmd.h 00:03:45.364 TEST_HEADER include/spdk/xor.h 00:03:45.364 TEST_HEADER include/spdk/zipf.h 00:03:45.364 CXX test/cpp_headers/accel.o 00:03:45.364 CC app/vhost/vhost.o 00:03:45.364 CC test/event/reactor/reactor.o 00:03:45.364 LINK hello_fsdev 00:03:45.364 LINK idxd_perf 00:03:45.623 LINK spdk_bdev 00:03:45.623 CXX test/cpp_headers/accel_module.o 00:03:45.623 LINK event_perf 00:03:45.623 CXX test/cpp_headers/assert.o 00:03:45.623 LINK reactor 00:03:45.623 CC test/env/vtophys/vtophys.o 00:03:45.623 LINK vhost 00:03:45.623 CXX test/cpp_headers/barrier.o 00:03:45.623 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:45.623 LINK vtophys 00:03:45.881 CC test/env/memory/memory_ut.o 00:03:45.881 CC examples/accel/perf/accel_perf.o 00:03:45.881 CC test/event/reactor_perf/reactor_perf.o 00:03:45.881 CXX test/cpp_headers/base64.o 00:03:45.881 LINK env_dpdk_post_init 00:03:45.881 LINK mem_callbacks 00:03:45.881 LINK reactor_perf 00:03:45.881 CC examples/blob/hello_world/hello_blob.o 00:03:45.881 CXX test/cpp_headers/bdev.o 00:03:45.881 CC examples/nvme/hello_world/hello_world.o 00:03:46.140 CC test/nvme/aer/aer.o 00:03:46.140 CC test/rpc_client/rpc_client_test.o 00:03:46.140 CC test/event/app_repeat/app_repeat.o 00:03:46.140 CXX test/cpp_headers/bdev_module.o 00:03:46.140 LINK accel_perf 00:03:46.140 LINK hello_blob 00:03:46.140 CC test/event/scheduler/scheduler.o 00:03:46.140 LINK hello_world 00:03:46.140 LINK rpc_client_test 00:03:46.140 LINK app_repeat 00:03:46.140 LINK aer 00:03:46.397 CXX test/cpp_headers/bdev_zone.o 00:03:46.397 LINK scheduler 00:03:46.397 CC examples/nvme/reconnect/reconnect.o 00:03:46.397 LINK iscsi_fuzz 00:03:46.397 CC examples/blob/cli/blobcli.o 00:03:46.397 CC test/env/pci/pci_ut.o 00:03:46.397 CC test/nvme/reset/reset.o 00:03:46.397 CXX test/cpp_headers/bit_array.o 00:03:46.397 CC test/nvme/sgl/sgl.o 00:03:46.397 CXX test/cpp_headers/bit_pool.o 00:03:46.397 CC test/accel/dif/dif.o 00:03:46.654 CXX test/cpp_headers/blob_bdev.o 00:03:46.654 LINK reset 00:03:46.654 CC test/nvme/e2edp/nvme_dp.o 00:03:46.654 LINK sgl 00:03:46.654 CC test/nvme/overhead/overhead.o 00:03:46.654 CXX test/cpp_headers/blobfs_bdev.o 00:03:46.654 CXX test/cpp_headers/blobfs.o 00:03:46.654 LINK reconnect 00:03:46.654 LINK pci_ut 00:03:46.912 LINK memory_ut 00:03:46.912 CXX test/cpp_headers/blob.o 00:03:46.912 LINK nvme_dp 00:03:46.912 LINK blobcli 00:03:46.912 LINK overhead 00:03:46.912 CC test/nvme/err_injection/err_injection.o 00:03:46.912 CC 
examples/nvme/nvme_manage/nvme_manage.o 00:03:46.912 CC test/blobfs/mkfs/mkfs.o 00:03:46.912 CXX test/cpp_headers/conf.o 00:03:46.912 CC test/nvme/reserve/reserve.o 00:03:46.912 CC test/nvme/startup/startup.o 00:03:47.170 LINK err_injection 00:03:47.170 CC test/nvme/simple_copy/simple_copy.o 00:03:47.170 LINK dif 00:03:47.170 CC test/nvme/connect_stress/connect_stress.o 00:03:47.170 CXX test/cpp_headers/config.o 00:03:47.170 LINK mkfs 00:03:47.170 CXX test/cpp_headers/cpuset.o 00:03:47.170 CXX test/cpp_headers/crc16.o 00:03:47.170 LINK startup 00:03:47.170 LINK reserve 00:03:47.170 LINK connect_stress 00:03:47.170 CC test/lvol/esnap/esnap.o 00:03:47.170 CC test/nvme/boot_partition/boot_partition.o 00:03:47.170 LINK simple_copy 00:03:47.170 CXX test/cpp_headers/crc32.o 00:03:47.429 CXX test/cpp_headers/crc64.o 00:03:47.429 CXX test/cpp_headers/dif.o 00:03:47.429 CC test/nvme/compliance/nvme_compliance.o 00:03:47.429 CXX test/cpp_headers/dma.o 00:03:47.429 CC test/nvme/fused_ordering/fused_ordering.o 00:03:47.429 LINK boot_partition 00:03:47.429 LINK nvme_manage 00:03:47.429 CXX test/cpp_headers/endian.o 00:03:47.429 CXX test/cpp_headers/env_dpdk.o 00:03:47.429 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:47.429 CC test/nvme/fdp/fdp.o 00:03:47.429 CC test/nvme/cuse/cuse.o 00:03:47.429 CXX test/cpp_headers/env.o 00:03:47.688 LINK fused_ordering 00:03:47.688 CC examples/nvme/arbitration/arbitration.o 00:03:47.688 CXX test/cpp_headers/event.o 00:03:47.688 CXX test/cpp_headers/fd_group.o 00:03:47.688 CXX test/cpp_headers/fd.o 00:03:47.688 LINK nvme_compliance 00:03:47.688 LINK doorbell_aers 00:03:47.688 CC test/bdev/bdevio/bdevio.o 00:03:47.688 CXX test/cpp_headers/file.o 00:03:47.688 CXX test/cpp_headers/fsdev.o 00:03:47.688 CXX test/cpp_headers/fsdev_module.o 00:03:47.947 LINK fdp 00:03:47.947 CC examples/nvme/hotplug/hotplug.o 00:03:47.947 CXX test/cpp_headers/ftl.o 00:03:47.947 LINK arbitration 00:03:47.947 CC examples/bdev/hello_world/hello_bdev.o 00:03:47.947 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:47.947 CC examples/nvme/abort/abort.o 00:03:47.947 CC examples/bdev/bdevperf/bdevperf.o 00:03:47.947 CXX test/cpp_headers/gpt_spec.o 00:03:48.205 LINK hotplug 00:03:48.205 LINK bdevio 00:03:48.205 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:48.205 CXX test/cpp_headers/hexlify.o 00:03:48.205 LINK hello_bdev 00:03:48.205 LINK cmb_copy 00:03:48.205 CXX test/cpp_headers/histogram_data.o 00:03:48.205 CXX test/cpp_headers/idxd.o 00:03:48.205 CXX test/cpp_headers/idxd_spec.o 00:03:48.205 LINK pmr_persistence 00:03:48.205 CXX test/cpp_headers/init.o 00:03:48.205 CXX test/cpp_headers/ioat.o 00:03:48.463 CXX test/cpp_headers/ioat_spec.o 00:03:48.463 CXX test/cpp_headers/iscsi_spec.o 00:03:48.463 LINK abort 00:03:48.463 CXX test/cpp_headers/json.o 00:03:48.463 CXX test/cpp_headers/jsonrpc.o 00:03:48.463 CXX test/cpp_headers/keyring.o 00:03:48.463 CXX test/cpp_headers/keyring_module.o 00:03:48.463 CXX test/cpp_headers/likely.o 00:03:48.463 CXX test/cpp_headers/log.o 00:03:48.463 CXX test/cpp_headers/lvol.o 00:03:48.463 CXX test/cpp_headers/md5.o 00:03:48.463 CXX test/cpp_headers/memory.o 00:03:48.463 CXX test/cpp_headers/mmio.o 00:03:48.463 CXX test/cpp_headers/nbd.o 00:03:48.722 CXX test/cpp_headers/net.o 00:03:48.722 CXX test/cpp_headers/notify.o 00:03:48.722 CXX test/cpp_headers/nvme.o 00:03:48.722 CXX test/cpp_headers/nvme_intel.o 00:03:48.722 CXX test/cpp_headers/nvme_ocssd.o 00:03:48.722 LINK cuse 00:03:48.722 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:48.722 CXX 
test/cpp_headers/nvme_spec.o 00:03:48.722 CXX test/cpp_headers/nvme_zns.o 00:03:48.722 CXX test/cpp_headers/nvmf_cmd.o 00:03:48.722 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:48.722 CXX test/cpp_headers/nvmf.o 00:03:48.722 LINK bdevperf 00:03:48.722 CXX test/cpp_headers/nvmf_spec.o 00:03:48.722 CXX test/cpp_headers/nvmf_transport.o 00:03:48.722 CXX test/cpp_headers/opal.o 00:03:48.980 CXX test/cpp_headers/opal_spec.o 00:03:48.980 CXX test/cpp_headers/pci_ids.o 00:03:48.980 CXX test/cpp_headers/pipe.o 00:03:48.980 CXX test/cpp_headers/queue.o 00:03:48.980 CXX test/cpp_headers/reduce.o 00:03:48.980 CXX test/cpp_headers/rpc.o 00:03:48.980 CXX test/cpp_headers/scheduler.o 00:03:48.980 CXX test/cpp_headers/scsi.o 00:03:48.980 CXX test/cpp_headers/scsi_spec.o 00:03:48.980 CXX test/cpp_headers/sock.o 00:03:48.980 CXX test/cpp_headers/stdinc.o 00:03:48.980 CXX test/cpp_headers/string.o 00:03:48.980 CXX test/cpp_headers/thread.o 00:03:48.980 CXX test/cpp_headers/trace.o 00:03:48.980 CXX test/cpp_headers/trace_parser.o 00:03:49.309 CXX test/cpp_headers/tree.o 00:03:49.309 CXX test/cpp_headers/ublk.o 00:03:49.309 CXX test/cpp_headers/util.o 00:03:49.309 CXX test/cpp_headers/uuid.o 00:03:49.309 CC examples/nvmf/nvmf/nvmf.o 00:03:49.309 CXX test/cpp_headers/version.o 00:03:49.309 CXX test/cpp_headers/vfio_user_pci.o 00:03:49.309 CXX test/cpp_headers/vfio_user_spec.o 00:03:49.309 CXX test/cpp_headers/vhost.o 00:03:49.309 CXX test/cpp_headers/vmd.o 00:03:49.309 CXX test/cpp_headers/xor.o 00:03:49.309 CXX test/cpp_headers/zipf.o 00:03:49.309 LINK nvmf 00:03:51.836 LINK esnap 00:03:52.094 00:03:52.094 real 1m9.782s 00:03:52.094 user 6m14.678s 00:03:52.094 sys 1m11.313s 00:03:52.094 19:05:36 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:52.094 19:05:36 make -- common/autotest_common.sh@10 -- $ set +x 00:03:52.094 ************************************ 00:03:52.094 END TEST make 00:03:52.094 ************************************ 00:03:52.094 19:05:36 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:52.094 19:05:36 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:52.094 19:05:36 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:52.094 19:05:36 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:52.094 19:05:36 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:52.094 19:05:36 -- pm/common@44 -- $ pid=5071 00:03:52.094 19:05:36 -- pm/common@50 -- $ kill -TERM 5071 00:03:52.094 19:05:36 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:52.094 19:05:36 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:52.094 19:05:36 -- pm/common@44 -- $ pid=5073 00:03:52.094 19:05:36 -- pm/common@50 -- $ kill -TERM 5073 00:03:52.094 19:05:36 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:03:52.094 19:05:36 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:52.353 19:05:36 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:52.353 19:05:36 -- common/autotest_common.sh@1711 -- # lcov --version 00:03:52.353 19:05:36 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:52.353 19:05:36 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:52.353 19:05:36 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:52.353 19:05:36 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:52.353 19:05:36 -- scripts/common.sh@334 -- 
# local ver2 ver2_l 00:03:52.353 19:05:36 -- scripts/common.sh@336 -- # IFS=.-: 00:03:52.353 19:05:36 -- scripts/common.sh@336 -- # read -ra ver1 00:03:52.353 19:05:36 -- scripts/common.sh@337 -- # IFS=.-: 00:03:52.353 19:05:36 -- scripts/common.sh@337 -- # read -ra ver2 00:03:52.353 19:05:36 -- scripts/common.sh@338 -- # local 'op=<' 00:03:52.353 19:05:36 -- scripts/common.sh@340 -- # ver1_l=2 00:03:52.353 19:05:36 -- scripts/common.sh@341 -- # ver2_l=1 00:03:52.353 19:05:36 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:52.353 19:05:36 -- scripts/common.sh@344 -- # case "$op" in 00:03:52.353 19:05:36 -- scripts/common.sh@345 -- # : 1 00:03:52.353 19:05:36 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:52.353 19:05:36 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:52.353 19:05:36 -- scripts/common.sh@365 -- # decimal 1 00:03:52.353 19:05:36 -- scripts/common.sh@353 -- # local d=1 00:03:52.353 19:05:36 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:52.353 19:05:36 -- scripts/common.sh@355 -- # echo 1 00:03:52.353 19:05:36 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:52.353 19:05:36 -- scripts/common.sh@366 -- # decimal 2 00:03:52.353 19:05:36 -- scripts/common.sh@353 -- # local d=2 00:03:52.353 19:05:36 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:52.353 19:05:36 -- scripts/common.sh@355 -- # echo 2 00:03:52.353 19:05:36 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:52.353 19:05:36 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:52.353 19:05:36 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:52.353 19:05:36 -- scripts/common.sh@368 -- # return 0 00:03:52.353 19:05:36 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:52.353 19:05:36 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:52.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:52.353 --rc genhtml_branch_coverage=1 00:03:52.353 --rc genhtml_function_coverage=1 00:03:52.353 --rc genhtml_legend=1 00:03:52.353 --rc geninfo_all_blocks=1 00:03:52.353 --rc geninfo_unexecuted_blocks=1 00:03:52.353 00:03:52.353 ' 00:03:52.353 19:05:36 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:52.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:52.353 --rc genhtml_branch_coverage=1 00:03:52.353 --rc genhtml_function_coverage=1 00:03:52.353 --rc genhtml_legend=1 00:03:52.353 --rc geninfo_all_blocks=1 00:03:52.353 --rc geninfo_unexecuted_blocks=1 00:03:52.353 00:03:52.353 ' 00:03:52.353 19:05:36 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:52.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:52.353 --rc genhtml_branch_coverage=1 00:03:52.353 --rc genhtml_function_coverage=1 00:03:52.353 --rc genhtml_legend=1 00:03:52.353 --rc geninfo_all_blocks=1 00:03:52.353 --rc geninfo_unexecuted_blocks=1 00:03:52.353 00:03:52.353 ' 00:03:52.353 19:05:36 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:52.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:52.353 --rc genhtml_branch_coverage=1 00:03:52.353 --rc genhtml_function_coverage=1 00:03:52.353 --rc genhtml_legend=1 00:03:52.353 --rc geninfo_all_blocks=1 00:03:52.353 --rc geninfo_unexecuted_blocks=1 00:03:52.353 00:03:52.353 ' 00:03:52.353 19:05:36 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:52.353 19:05:36 -- nvmf/common.sh@7 -- # uname -s 00:03:52.353 19:05:36 -- nvmf/common.sh@7 -- # [[ 
Linux == FreeBSD ]] 00:03:52.353 19:05:36 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:52.353 19:05:36 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:52.353 19:05:36 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:52.353 19:05:36 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:52.353 19:05:36 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:52.353 19:05:36 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:52.353 19:05:36 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:52.353 19:05:36 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:52.353 19:05:36 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:52.353 19:05:36 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9c41e311-20d4-414b-91c1-cda181937799 00:03:52.353 19:05:36 -- nvmf/common.sh@18 -- # NVME_HOSTID=9c41e311-20d4-414b-91c1-cda181937799 00:03:52.353 19:05:36 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:52.353 19:05:36 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:52.353 19:05:36 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:52.353 19:05:36 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:52.353 19:05:36 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:52.353 19:05:36 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:52.353 19:05:36 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:52.353 19:05:36 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:52.353 19:05:36 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:52.353 19:05:36 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:52.353 19:05:36 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:52.353 19:05:36 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:52.353 19:05:36 -- paths/export.sh@5 -- # export PATH 00:03:52.353 19:05:36 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:52.353 19:05:36 -- nvmf/common.sh@51 -- # : 0 00:03:52.353 19:05:36 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:52.353 19:05:36 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:52.353 19:05:36 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:52.353 19:05:36 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:52.353 19:05:36 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:52.353 19:05:36 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:52.353 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: 
[: : integer expression expected 00:03:52.353 19:05:36 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:52.353 19:05:36 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:52.353 19:05:36 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:52.353 19:05:36 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:52.353 19:05:36 -- spdk/autotest.sh@32 -- # uname -s 00:03:52.353 19:05:36 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:52.353 19:05:36 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:52.353 19:05:36 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:52.353 19:05:36 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:52.353 19:05:36 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:52.353 19:05:36 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:52.353 19:05:36 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:52.353 19:05:36 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:52.353 19:05:36 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:52.353 19:05:36 -- spdk/autotest.sh@48 -- # udevadm_pid=56100 00:03:52.353 19:05:36 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:52.353 19:05:36 -- pm/common@17 -- # local monitor 00:03:52.353 19:05:36 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:52.353 19:05:36 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:52.353 19:05:36 -- pm/common@25 -- # sleep 1 00:03:52.353 19:05:36 -- pm/common@21 -- # date +%s 00:03:52.353 19:05:36 -- pm/common@21 -- # date +%s 00:03:52.353 19:05:36 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1734375936 00:03:52.353 19:05:36 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1734375936 00:03:52.353 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1734375936_collect-vmstat.pm.log 00:03:52.353 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1734375936_collect-cpu-load.pm.log 00:03:53.286 19:05:37 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:53.286 19:05:37 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:53.286 19:05:37 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:53.286 19:05:37 -- common/autotest_common.sh@10 -- # set +x 00:03:53.286 19:05:37 -- spdk/autotest.sh@59 -- # create_test_list 00:03:53.286 19:05:37 -- common/autotest_common.sh@752 -- # xtrace_disable 00:03:53.286 19:05:37 -- common/autotest_common.sh@10 -- # set +x 00:03:53.544 19:05:37 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:53.544 19:05:37 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:53.544 19:05:37 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:53.544 19:05:37 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:53.544 19:05:37 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:53.544 19:05:37 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:53.544 19:05:37 -- common/autotest_common.sh@1457 -- # uname 00:03:53.544 19:05:37 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:03:53.544 19:05:37 
-- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:53.544 19:05:37 -- common/autotest_common.sh@1477 -- # uname 00:03:53.544 19:05:37 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:03:53.544 19:05:37 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:53.544 19:05:37 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:53.544 lcov: LCOV version 1.15 00:03:53.544 19:05:37 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:08.439 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:08.439 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:04:20.743 19:06:05 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:20.743 19:06:05 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:20.743 19:06:05 -- common/autotest_common.sh@10 -- # set +x 00:04:20.743 19:06:05 -- spdk/autotest.sh@78 -- # rm -f 00:04:21.004 19:06:05 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:21.265 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:21.837 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:04:21.837 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:04:21.837 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:04:21.837 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:04:21.837 19:06:06 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:04:21.837 19:06:06 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:04:21.837 19:06:06 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:04:21.837 19:06:06 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:04:21.837 19:06:06 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:04:21.837 19:06:06 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:04:21.837 19:06:06 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:04:21.837 19:06:06 -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:04:21.837 19:06:06 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:04:21.837 19:06:06 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:04:21.837 19:06:06 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:04:21.837 19:06:06 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:21.837 19:06:06 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:21.837 19:06:06 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:04:21.837 19:06:06 -- common/autotest_common.sh@1669 -- # bdf=0000:00:12.0 00:04:21.837 19:06:06 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:04:21.837 19:06:06 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:04:21.837 19:06:06 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:04:21.837 19:06:06 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:21.837 19:06:06 -- 
common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:21.837 19:06:06 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:04:21.837 19:06:06 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n2 00:04:21.837 19:06:06 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:04:21.837 19:06:06 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:21.837 19:06:06 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:21.837 19:06:06 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:04:21.837 19:06:06 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n3 00:04:21.837 19:06:06 -- common/autotest_common.sh@1650 -- # local device=nvme1n3 00:04:21.837 19:06:06 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:21.837 19:06:06 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:21.837 19:06:06 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:04:21.837 19:06:06 -- common/autotest_common.sh@1669 -- # bdf=0000:00:13.0 00:04:21.837 19:06:06 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:04:21.837 19:06:06 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2c2n1 00:04:21.837 19:06:06 -- common/autotest_common.sh@1650 -- # local device=nvme2c2n1 00:04:21.837 19:06:06 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2c2n1/queue/zoned ]] 00:04:21.837 19:06:06 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:21.837 19:06:06 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:04:21.837 19:06:06 -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:04:21.837 19:06:06 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:04:21.837 19:06:06 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3n1 00:04:21.837 19:06:06 -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:04:21.837 19:06:06 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:04:21.837 19:06:06 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:21.837 19:06:06 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:21.837 19:06:06 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:21.837 19:06:06 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:21.837 19:06:06 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:04:21.837 19:06:06 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:21.837 19:06:06 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:22.099 No valid GPT data, bailing 00:04:22.099 19:06:06 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:22.099 19:06:06 -- scripts/common.sh@394 -- # pt= 00:04:22.099 19:06:06 -- scripts/common.sh@395 -- # return 1 00:04:22.099 19:06:06 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:22.099 1+0 records in 00:04:22.099 1+0 records out 00:04:22.099 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.019558 s, 53.6 MB/s 00:04:22.099 19:06:06 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:22.099 19:06:06 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:22.099 19:06:06 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:04:22.099 19:06:06 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:04:22.099 19:06:06 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:22.099 No valid GPT data, bailing 
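
[editor's note] The get_zoned_devs loop traced above walks every controller under /sys/class/nvme, reads each namespace's queue/zoned sysfs attribute, and records any device whose value is not "none" so zoned namespaces can be excluded from the generic block tests; on this rig every check is [[ none != none ]], so the list stays empty and (( 0 > 0 )) falls through. A minimal standalone sketch of that pattern (the associative array and the BDF lookup via readlink are illustrative, not the exact autotest helpers):

#!/usr/bin/env bash
# Collect zoned NVMe namespaces, keyed namespace -> controller PCI address.
declare -A zoned_devs
for nvme in /sys/class/nvme/nvme*; do
    [[ -e $nvme ]] || continue
    # The controller's device symlink resolves to its PCI function (assumed layout).
    bdf=$(basename "$(readlink -f "$nvme/device")")
    for ns in "$nvme/"nvme*n*; do
        dev=$(basename "$ns")
        # Non-zoned namespaces report "none"; anything else (e.g. host-managed) is zoned.
        if [[ -e /sys/block/$dev/queue/zoned && $(cat /sys/block/$dev/queue/zoned) != none ]]; then
            zoned_devs[$dev]=$bdf
        fi
    done
done
echo "zoned namespaces found: ${#zoned_devs[@]}"
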
00:04:22.099 19:06:06 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:22.099 19:06:06 -- scripts/common.sh@394 -- # pt= 00:04:22.099 19:06:06 -- scripts/common.sh@395 -- # return 1 00:04:22.099 19:06:06 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:22.099 1+0 records in 00:04:22.099 1+0 records out 00:04:22.099 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00628315 s, 167 MB/s 00:04:22.099 19:06:06 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:22.099 19:06:06 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:22.099 19:06:06 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:04:22.099 19:06:06 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:04:22.099 19:06:06 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:04:22.099 No valid GPT data, bailing 00:04:22.099 19:06:06 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:04:22.361 19:06:06 -- scripts/common.sh@394 -- # pt= 00:04:22.361 19:06:06 -- scripts/common.sh@395 -- # return 1 00:04:22.361 19:06:06 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:04:22.361 1+0 records in 00:04:22.361 1+0 records out 00:04:22.361 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00685072 s, 153 MB/s 00:04:22.361 19:06:06 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:22.361 19:06:06 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:22.361 19:06:06 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:04:22.361 19:06:06 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:04:22.361 19:06:06 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:04:22.361 No valid GPT data, bailing 00:04:22.361 19:06:06 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:04:22.361 19:06:06 -- scripts/common.sh@394 -- # pt= 00:04:22.361 19:06:06 -- scripts/common.sh@395 -- # return 1 00:04:22.361 19:06:06 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:04:22.361 1+0 records in 00:04:22.361 1+0 records out 00:04:22.361 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0066604 s, 157 MB/s 00:04:22.361 19:06:06 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:22.361 19:06:06 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:22.361 19:06:06 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:04:22.361 19:06:06 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:04:22.361 19:06:06 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:04:22.361 No valid GPT data, bailing 00:04:22.361 19:06:06 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:04:22.361 19:06:06 -- scripts/common.sh@394 -- # pt= 00:04:22.361 19:06:06 -- scripts/common.sh@395 -- # return 1 00:04:22.361 19:06:06 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:04:22.361 1+0 records in 00:04:22.361 1+0 records out 00:04:22.361 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0063132 s, 166 MB/s 00:04:22.361 19:06:06 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:22.361 19:06:06 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:22.361 19:06:06 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:04:22.361 19:06:06 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:04:22.361 19:06:06 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:04:22.361 No valid GPT data, bailing 
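
[editor's note] Each "No valid GPT data, bailing" above is one iteration of the same per-device sequence: probe for a partition table, and only if none is found, zero the first mebibyte so stale GPT or filesystem metadata cannot leak into the next run (hence the repeated 1+0 records / 1048576 bytes dd output). A hedged recreation of that sequence for a single device, needing root; the real script also consults spdk-gpt.py before falling back to blkid:

#!/usr/bin/env bash
dev=/dev/nvme0n1   # example target; substitute any namespace from the trace
# A non-empty PTTYPE (e.g. "gpt" or "dos") means the disk is in use -> leave it alone.
pt=$(blkid -s PTTYPE -o value "$dev" 2>/dev/null)
if [[ -z $pt ]]; then
    # No partition table detected: scrub the first 1 MiB, as in the dd lines above.
    dd if=/dev/zero of="$dev" bs=1M count=1
else
    echo "$dev has partition table '$pt', skipping wipe"
fi
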
00:04:22.361 19:06:06 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:04:22.361 19:06:06 -- scripts/common.sh@394 -- # pt= 00:04:22.361 19:06:06 -- scripts/common.sh@395 -- # return 1 00:04:22.622 19:06:06 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:04:22.622 1+0 records in 00:04:22.622 1+0 records out 00:04:22.622 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00505804 s, 207 MB/s 00:04:22.622 19:06:06 -- spdk/autotest.sh@105 -- # sync 00:04:22.622 19:06:06 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:22.622 19:06:06 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:22.622 19:06:06 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:24.536 19:06:08 -- spdk/autotest.sh@111 -- # uname -s 00:04:24.536 19:06:08 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:24.536 19:06:08 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:24.536 19:06:08 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:25.150 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:25.412 Hugepages 00:04:25.412 node hugesize free / total 00:04:25.412 node0 1048576kB 0 / 0 00:04:25.412 node0 2048kB 0 / 0 00:04:25.412 00:04:25.412 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:25.412 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:25.674 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:25.674 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:04:25.674 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:04:25.674 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme2 nvme2n1 00:04:25.674 19:06:09 -- spdk/autotest.sh@117 -- # uname -s 00:04:25.674 19:06:09 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:25.674 19:06:09 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:25.674 19:06:09 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:26.247 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:26.818 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:26.818 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:04:26.818 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:26.818 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:04:27.079 19:06:11 -- common/autotest_common.sh@1517 -- # sleep 1 00:04:28.024 19:06:12 -- common/autotest_common.sh@1518 -- # bdfs=() 00:04:28.024 19:06:12 -- common/autotest_common.sh@1518 -- # local bdfs 00:04:28.024 19:06:12 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:04:28.024 19:06:12 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:04:28.024 19:06:12 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:28.024 19:06:12 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:28.024 19:06:12 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:28.024 19:06:12 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:28.024 19:06:12 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:28.024 19:06:12 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:04:28.024 19:06:12 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:04:28.024 19:06:12 -- common/autotest_common.sh@1522 
-- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:28.285 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:28.546 Waiting for block devices as requested 00:04:28.546 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:28.809 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:28.809 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:04:28.809 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:04:34.138 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:04:34.138 19:06:18 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:34.138 19:06:18 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:34.138 19:06:18 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:34.138 19:06:18 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:04:34.138 19:06:18 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:34.138 19:06:18 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:34.138 19:06:18 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:34.138 19:06:18 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:04:34.138 19:06:18 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:04:34.138 19:06:18 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:04:34.138 19:06:18 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:34.138 19:06:18 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:04:34.138 19:06:18 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:34.138 19:06:18 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:34.138 19:06:18 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:34.138 19:06:18 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:34.138 19:06:18 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:04:34.138 19:06:18 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:34.138 19:06:18 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:34.138 19:06:18 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:34.138 19:06:18 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:34.138 19:06:18 -- common/autotest_common.sh@1543 -- # continue 00:04:34.138 19:06:18 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:34.138 19:06:18 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:34.138 19:06:18 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:04:34.138 19:06:18 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:34.138 19:06:18 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:34.138 19:06:18 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:34.138 19:06:18 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:34.138 19:06:18 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:04:34.138 19:06:18 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:04:34.138 19:06:18 -- common/autotest_common.sh@1526 -- # [[ 
-z /dev/nvme0 ]] 00:04:34.138 19:06:18 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:04:34.138 19:06:18 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:34.138 19:06:18 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:34.138 19:06:18 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:34.138 19:06:18 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:34.138 19:06:18 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:34.138 19:06:18 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:34.138 19:06:18 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:34.138 19:06:18 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:34.138 19:06:18 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:34.138 19:06:18 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:34.138 19:06:18 -- common/autotest_common.sh@1543 -- # continue 00:04:34.138 19:06:18 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:34.138 19:06:18 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:04:34.138 19:06:18 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:34.138 19:06:18 -- common/autotest_common.sh@1487 -- # grep 0000:00:12.0/nvme/nvme 00:04:34.138 19:06:18 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:04:34.138 19:06:18 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:04:34.138 19:06:18 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:04:34.138 19:06:18 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme2 00:04:34.138 19:06:18 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme2 00:04:34.138 19:06:18 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme2 ]] 00:04:34.138 19:06:18 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme2 00:04:34.138 19:06:18 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:34.138 19:06:18 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:34.138 19:06:18 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:34.138 19:06:18 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:34.138 19:06:18 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:34.138 19:06:18 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme2 00:04:34.138 19:06:18 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:34.138 19:06:18 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:34.138 19:06:18 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:34.138 19:06:18 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:34.138 19:06:18 -- common/autotest_common.sh@1543 -- # continue 00:04:34.138 19:06:18 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:34.138 19:06:18 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:04:34.138 19:06:18 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:34.138 19:06:18 -- common/autotest_common.sh@1487 -- # grep 0000:00:13.0/nvme/nvme 00:04:34.138 19:06:18 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:04:34.138 19:06:18 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 
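
[editor's note] The block above repeats one pattern per controller: map a PCI BDF back to its /dev/nvmeX character device through sysfs, then parse nvme id-ctrl to see whether the controller advertises namespace management and whether any unallocated capacity remains. oacs=' 0x12a' has bit 3 set (0x12a & 0x8 = 8), which is why every branch takes [[ 8 -ne 0 ]], and unvmcap=' 0' means there is nothing to revert, hence continue. A condensed sketch assuming nvme-cli is installed; the explicit bit-mask step is inferred from the value 8 in the trace rather than copied from the script:

#!/usr/bin/env bash
bdf=0000:00:10.0   # one of the four BDFs exercised above
# sysfs: /sys/class/nvme/nvmeN resolves under .../<bdf>/nvme/nvmeN for its controller.
path=$(readlink -f /sys/class/nvme/nvme* | grep "$bdf/nvme/nvme")
ctrlr=/dev/$(basename "$path")
oacs=$(nvme id-ctrl "$ctrlr" | grep oacs | cut -d: -f2)
if (( oacs & 0x8 )); then      # OACS bit 3 = Namespace Management supported
    unvmcap=$(nvme id-ctrl "$ctrlr" | grep unvmcap | cut -d: -f2)
    # Zero unallocated NVM capacity -> no namespaces to rebuild on this controller.
    (( unvmcap == 0 )) && echo "$ctrlr: namespaces intact, nothing to revert"
fi
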
00:04:34.138 19:06:18 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:04:34.138 19:06:18 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme3 00:04:34.138 19:06:18 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme3 00:04:34.138 19:06:18 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme3 ]] 00:04:34.138 19:06:18 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:34.138 19:06:18 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme3 00:04:34.138 19:06:18 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:34.138 19:06:18 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:34.138 19:06:18 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:34.138 19:06:18 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:34.138 19:06:18 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:34.138 19:06:18 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme3 00:04:34.138 19:06:18 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:34.138 19:06:18 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:34.138 19:06:18 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:34.138 19:06:18 -- common/autotest_common.sh@1543 -- # continue 00:04:34.138 19:06:18 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:34.138 19:06:18 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:34.138 19:06:18 -- common/autotest_common.sh@10 -- # set +x 00:04:34.138 19:06:18 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:34.138 19:06:18 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:34.138 19:06:18 -- common/autotest_common.sh@10 -- # set +x 00:04:34.138 19:06:18 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:34.710 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:35.282 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:35.282 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:04:35.282 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:35.282 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:04:35.543 19:06:19 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:35.544 19:06:19 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:35.544 19:06:19 -- common/autotest_common.sh@10 -- # set +x 00:04:35.544 19:06:19 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:35.544 19:06:19 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:04:35.544 19:06:19 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:04:35.544 19:06:19 -- common/autotest_common.sh@1563 -- # bdfs=() 00:04:35.544 19:06:19 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:04:35.544 19:06:19 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:04:35.544 19:06:19 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:04:35.544 19:06:19 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:04:35.544 19:06:19 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:35.544 19:06:19 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:35.544 19:06:19 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:35.544 19:06:19 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:35.544 19:06:19 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:35.544 19:06:19 -- common/autotest_common.sh@1500 -- # (( 4 == 0 
)) 00:04:35.544 19:06:19 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:04:35.544 19:06:19 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:35.544 19:06:19 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:35.544 19:06:19 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:35.544 19:06:19 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:35.544 19:06:19 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:35.544 19:06:19 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:35.544 19:06:19 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:35.544 19:06:19 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:35.544 19:06:19 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:35.544 19:06:19 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:04:35.544 19:06:19 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:35.544 19:06:19 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:35.544 19:06:19 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:35.544 19:06:19 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:04:35.544 19:06:19 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:35.544 19:06:19 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:35.544 19:06:19 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:04:35.544 19:06:19 -- common/autotest_common.sh@1572 -- # return 0 00:04:35.544 19:06:19 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:04:35.544 19:06:19 -- common/autotest_common.sh@1580 -- # return 0 00:04:35.544 19:06:19 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:35.544 19:06:19 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:35.544 19:06:19 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:35.544 19:06:19 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:35.544 19:06:19 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:35.544 19:06:19 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:35.544 19:06:19 -- common/autotest_common.sh@10 -- # set +x 00:04:35.544 19:06:19 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:35.544 19:06:19 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:35.544 19:06:19 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:35.544 19:06:19 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:35.544 19:06:19 -- common/autotest_common.sh@10 -- # set +x 00:04:35.544 ************************************ 00:04:35.544 START TEST env 00:04:35.544 ************************************ 00:04:35.544 19:06:19 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:35.544 * Looking for test storage... 
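
[editor's note] The 0x0010 vs 0x0a54 comparisons above are the opal_revert_cleanup gate: each controller's PCI device ID is read from sysfs and matched against 0x0a54, the ID the script targets for OPAL revert. Every controller on this QEMU rig reports 0x0010 (1b36:0010, the emulated QEMU NVMe device, as the earlier setup.sh status table shows), so no BDF matches and the revert is skipped. The equivalent standalone check, with the BDF list mirroring the printf above:

#!/usr/bin/env bash
target=0x0a54
matches=()
for bdf in 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0; do
    id=$(cat "/sys/bus/pci/devices/$bdf/device")   # e.g. "0x0010" on QEMU NVMe
    [[ $id == "$target" ]] && matches+=("$bdf")
done
echo "controllers needing opal revert: ${#matches[@]}"
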
00:04:35.544 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:35.544 19:06:19 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:35.544 19:06:19 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:35.544 19:06:19 env -- common/autotest_common.sh@1711 -- # lcov --version 00:04:35.805 19:06:19 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:35.805 19:06:19 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:35.805 19:06:19 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:35.805 19:06:19 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:35.805 19:06:19 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:35.805 19:06:19 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:35.805 19:06:19 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:35.805 19:06:19 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:35.805 19:06:19 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:35.805 19:06:19 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:35.805 19:06:19 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:35.805 19:06:19 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:35.805 19:06:19 env -- scripts/common.sh@344 -- # case "$op" in 00:04:35.805 19:06:19 env -- scripts/common.sh@345 -- # : 1 00:04:35.805 19:06:19 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:35.805 19:06:19 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:35.805 19:06:19 env -- scripts/common.sh@365 -- # decimal 1 00:04:35.805 19:06:19 env -- scripts/common.sh@353 -- # local d=1 00:04:35.805 19:06:19 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:35.805 19:06:19 env -- scripts/common.sh@355 -- # echo 1 00:04:35.805 19:06:19 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:35.805 19:06:19 env -- scripts/common.sh@366 -- # decimal 2 00:04:35.805 19:06:19 env -- scripts/common.sh@353 -- # local d=2 00:04:35.805 19:06:19 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:35.805 19:06:19 env -- scripts/common.sh@355 -- # echo 2 00:04:35.805 19:06:19 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:35.805 19:06:19 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:35.805 19:06:19 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:35.805 19:06:19 env -- scripts/common.sh@368 -- # return 0 00:04:35.805 19:06:19 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:35.805 19:06:19 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:35.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.805 --rc genhtml_branch_coverage=1 00:04:35.805 --rc genhtml_function_coverage=1 00:04:35.805 --rc genhtml_legend=1 00:04:35.805 --rc geninfo_all_blocks=1 00:04:35.805 --rc geninfo_unexecuted_blocks=1 00:04:35.805 00:04:35.805 ' 00:04:35.805 19:06:19 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:35.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.805 --rc genhtml_branch_coverage=1 00:04:35.805 --rc genhtml_function_coverage=1 00:04:35.805 --rc genhtml_legend=1 00:04:35.805 --rc geninfo_all_blocks=1 00:04:35.805 --rc geninfo_unexecuted_blocks=1 00:04:35.805 00:04:35.805 ' 00:04:35.805 19:06:19 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:35.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.805 --rc genhtml_branch_coverage=1 00:04:35.805 --rc genhtml_function_coverage=1 00:04:35.805 --rc 
genhtml_legend=1 00:04:35.805 --rc geninfo_all_blocks=1 00:04:35.805 --rc geninfo_unexecuted_blocks=1 00:04:35.805 00:04:35.805 ' 00:04:35.805 19:06:19 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:35.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.805 --rc genhtml_branch_coverage=1 00:04:35.805 --rc genhtml_function_coverage=1 00:04:35.805 --rc genhtml_legend=1 00:04:35.805 --rc geninfo_all_blocks=1 00:04:35.805 --rc geninfo_unexecuted_blocks=1 00:04:35.805 00:04:35.805 ' 00:04:35.805 19:06:19 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:35.806 19:06:19 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:35.806 19:06:19 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:35.806 19:06:19 env -- common/autotest_common.sh@10 -- # set +x 00:04:35.806 ************************************ 00:04:35.806 START TEST env_memory 00:04:35.806 ************************************ 00:04:35.806 19:06:19 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:35.806 00:04:35.806 00:04:35.806 CUnit - A unit testing framework for C - Version 2.1-3 00:04:35.806 http://cunit.sourceforge.net/ 00:04:35.806 00:04:35.806 00:04:35.806 Suite: memory 00:04:35.806 Test: alloc and free memory map ...[2024-12-16 19:06:20.031883] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:35.806 passed 00:04:35.806 Test: mem map translation ...[2024-12-16 19:06:20.071088] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:35.806 [2024-12-16 19:06:20.071259] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:35.806 [2024-12-16 19:06:20.071384] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:35.806 [2024-12-16 19:06:20.071424] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:35.806 passed 00:04:35.806 Test: mem map registration ...[2024-12-16 19:06:20.139791] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:35.806 [2024-12-16 19:06:20.139937] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:36.067 passed 00:04:36.067 Test: mem map adjacent registrations ...passed 00:04:36.067 00:04:36.067 Run Summary: Type Total Ran Passed Failed Inactive 00:04:36.067 suites 1 1 n/a 0 0 00:04:36.067 tests 4 4 4 0 0 00:04:36.067 asserts 152 152 152 0 n/a 00:04:36.067 00:04:36.067 Elapsed time = 0.233 seconds 00:04:36.067 00:04:36.067 ************************************ 00:04:36.067 END TEST env_memory 00:04:36.067 ************************************ 00:04:36.067 real 0m0.268s 00:04:36.067 user 0m0.239s 00:04:36.067 sys 0m0.021s 00:04:36.067 19:06:20 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:36.067 19:06:20 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:36.067 19:06:20 env -- env/env.sh@11 -- # run_test env_vtophys 
/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:36.067 19:06:20 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:36.067 19:06:20 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:36.067 19:06:20 env -- common/autotest_common.sh@10 -- # set +x 00:04:36.067 ************************************ 00:04:36.067 START TEST env_vtophys 00:04:36.067 ************************************ 00:04:36.067 19:06:20 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:36.067 EAL: lib.eal log level changed from notice to debug 00:04:36.067 EAL: Detected lcore 0 as core 0 on socket 0 00:04:36.067 EAL: Detected lcore 1 as core 0 on socket 0 00:04:36.067 EAL: Detected lcore 2 as core 0 on socket 0 00:04:36.067 EAL: Detected lcore 3 as core 0 on socket 0 00:04:36.067 EAL: Detected lcore 4 as core 0 on socket 0 00:04:36.067 EAL: Detected lcore 5 as core 0 on socket 0 00:04:36.067 EAL: Detected lcore 6 as core 0 on socket 0 00:04:36.067 EAL: Detected lcore 7 as core 0 on socket 0 00:04:36.067 EAL: Detected lcore 8 as core 0 on socket 0 00:04:36.067 EAL: Detected lcore 9 as core 0 on socket 0 00:04:36.067 EAL: Maximum logical cores by configuration: 128 00:04:36.067 EAL: Detected CPU lcores: 10 00:04:36.067 EAL: Detected NUMA nodes: 1 00:04:36.067 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:36.067 EAL: Detected shared linkage of DPDK 00:04:36.067 EAL: No shared files mode enabled, IPC will be disabled 00:04:36.067 EAL: Selected IOVA mode 'PA' 00:04:36.067 EAL: Probing VFIO support... 00:04:36.067 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:36.067 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:36.067 EAL: Ask a virtual area of 0x2e000 bytes 00:04:36.067 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:36.067 EAL: Setting up physically contiguous memory... 
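The START/END banners and timing blocks repeating through this log all come from one wrapper, run_test, in test/common/autotest_common.sh. A minimal sketch of that wrapper, reconstructed only from the banners visible above (the real helper also manages xtrace nesting and exit-status bookkeeping):

run_test() {
    local test_name=$1; shift
    echo "************************************"
    echo "START TEST $test_name"
    echo "************************************"
    time "$@"                     # run the suite binary or shell function
    echo "************************************"
    echo "END TEST $test_name"
    echo "************************************"
}

# e.g. the invocation traced above:
run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys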
00:04:36.067 EAL: Setting maximum number of open files to 524288 00:04:36.067 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:36.067 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:36.067 EAL: Ask a virtual area of 0x61000 bytes 00:04:36.067 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:36.067 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:36.067 EAL: Ask a virtual area of 0x400000000 bytes 00:04:36.067 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:36.067 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:36.067 EAL: Ask a virtual area of 0x61000 bytes 00:04:36.067 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:36.067 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:36.067 EAL: Ask a virtual area of 0x400000000 bytes 00:04:36.067 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:36.067 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:36.067 EAL: Ask a virtual area of 0x61000 bytes 00:04:36.067 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:36.067 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:36.067 EAL: Ask a virtual area of 0x400000000 bytes 00:04:36.067 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:36.067 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:36.067 EAL: Ask a virtual area of 0x61000 bytes 00:04:36.067 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:36.067 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:36.067 EAL: Ask a virtual area of 0x400000000 bytes 00:04:36.067 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:36.068 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:36.068 EAL: Hugepages will be freed exactly as allocated. 00:04:36.068 EAL: No shared files mode enabled, IPC is disabled 00:04:36.068 EAL: No shared files mode enabled, IPC is disabled 00:04:36.329 EAL: TSC frequency is ~2600000 KHz 00:04:36.329 EAL: Main lcore 0 is ready (tid=7f67d22d3a40;cpuset=[0]) 00:04:36.329 EAL: Trying to obtain current memory policy. 00:04:36.329 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:36.329 EAL: Restoring previous memory policy: 0 00:04:36.329 EAL: request: mp_malloc_sync 00:04:36.329 EAL: No shared files mode enabled, IPC is disabled 00:04:36.329 EAL: Heap on socket 0 was expanded by 2MB 00:04:36.330 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:36.330 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:36.330 EAL: Mem event callback 'spdk:(nil)' registered 00:04:36.330 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:04:36.330 00:04:36.330 00:04:36.330 CUnit - A unit testing framework for C - Version 2.1-3 00:04:36.330 http://cunit.sourceforge.net/ 00:04:36.330 00:04:36.330 00:04:36.330 Suite: components_suite 00:04:36.591 Test: vtophys_malloc_test ...passed 00:04:36.591 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
00:04:36.591 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:36.591 EAL: Restoring previous memory policy: 4 00:04:36.591 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.591 EAL: request: mp_malloc_sync 00:04:36.591 EAL: No shared files mode enabled, IPC is disabled 00:04:36.591 EAL: Heap on socket 0 was expanded by 4MB 00:04:36.591 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.591 EAL: request: mp_malloc_sync 00:04:36.591 EAL: No shared files mode enabled, IPC is disabled 00:04:36.591 EAL: Heap on socket 0 was shrunk by 4MB 00:04:36.591 EAL: Trying to obtain current memory policy. 00:04:36.591 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:36.591 EAL: Restoring previous memory policy: 4 00:04:36.591 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.591 EAL: request: mp_malloc_sync 00:04:36.591 EAL: No shared files mode enabled, IPC is disabled 00:04:36.591 EAL: Heap on socket 0 was expanded by 6MB 00:04:36.591 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.591 EAL: request: mp_malloc_sync 00:04:36.591 EAL: No shared files mode enabled, IPC is disabled 00:04:36.591 EAL: Heap on socket 0 was shrunk by 6MB 00:04:36.591 EAL: Trying to obtain current memory policy. 00:04:36.591 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:36.591 EAL: Restoring previous memory policy: 4 00:04:36.591 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.591 EAL: request: mp_malloc_sync 00:04:36.591 EAL: No shared files mode enabled, IPC is disabled 00:04:36.591 EAL: Heap on socket 0 was expanded by 10MB 00:04:36.591 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.591 EAL: request: mp_malloc_sync 00:04:36.591 EAL: No shared files mode enabled, IPC is disabled 00:04:36.591 EAL: Heap on socket 0 was shrunk by 10MB 00:04:36.591 EAL: Trying to obtain current memory policy. 00:04:36.591 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:36.591 EAL: Restoring previous memory policy: 4 00:04:36.591 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.591 EAL: request: mp_malloc_sync 00:04:36.591 EAL: No shared files mode enabled, IPC is disabled 00:04:36.591 EAL: Heap on socket 0 was expanded by 18MB 00:04:36.853 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.853 EAL: request: mp_malloc_sync 00:04:36.853 EAL: No shared files mode enabled, IPC is disabled 00:04:36.853 EAL: Heap on socket 0 was shrunk by 18MB 00:04:36.853 EAL: Trying to obtain current memory policy. 00:04:36.853 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:36.853 EAL: Restoring previous memory policy: 4 00:04:36.853 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.853 EAL: request: mp_malloc_sync 00:04:36.853 EAL: No shared files mode enabled, IPC is disabled 00:04:36.853 EAL: Heap on socket 0 was expanded by 34MB 00:04:36.853 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.853 EAL: request: mp_malloc_sync 00:04:36.853 EAL: No shared files mode enabled, IPC is disabled 00:04:36.853 EAL: Heap on socket 0 was shrunk by 34MB 00:04:36.853 EAL: Trying to obtain current memory policy. 
00:04:36.853 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:36.853 EAL: Restoring previous memory policy: 4 00:04:36.853 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.853 EAL: request: mp_malloc_sync 00:04:36.853 EAL: No shared files mode enabled, IPC is disabled 00:04:36.853 EAL: Heap on socket 0 was expanded by 66MB 00:04:36.853 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.853 EAL: request: mp_malloc_sync 00:04:36.853 EAL: No shared files mode enabled, IPC is disabled 00:04:36.853 EAL: Heap on socket 0 was shrunk by 66MB 00:04:37.114 EAL: Trying to obtain current memory policy. 00:04:37.114 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:37.114 EAL: Restoring previous memory policy: 4 00:04:37.114 EAL: Calling mem event callback 'spdk:(nil)' 00:04:37.114 EAL: request: mp_malloc_sync 00:04:37.114 EAL: No shared files mode enabled, IPC is disabled 00:04:37.114 EAL: Heap on socket 0 was expanded by 130MB 00:04:37.114 EAL: Calling mem event callback 'spdk:(nil)' 00:04:37.114 EAL: request: mp_malloc_sync 00:04:37.114 EAL: No shared files mode enabled, IPC is disabled 00:04:37.114 EAL: Heap on socket 0 was shrunk by 130MB 00:04:37.375 EAL: Trying to obtain current memory policy. 00:04:37.375 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:37.375 EAL: Restoring previous memory policy: 4 00:04:37.375 EAL: Calling mem event callback 'spdk:(nil)' 00:04:37.375 EAL: request: mp_malloc_sync 00:04:37.375 EAL: No shared files mode enabled, IPC is disabled 00:04:37.375 EAL: Heap on socket 0 was expanded by 258MB 00:04:37.636 EAL: Calling mem event callback 'spdk:(nil)' 00:04:37.636 EAL: request: mp_malloc_sync 00:04:37.636 EAL: No shared files mode enabled, IPC is disabled 00:04:37.636 EAL: Heap on socket 0 was shrunk by 258MB 00:04:37.897 EAL: Trying to obtain current memory policy. 00:04:37.897 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:38.159 EAL: Restoring previous memory policy: 4 00:04:38.159 EAL: Calling mem event callback 'spdk:(nil)' 00:04:38.159 EAL: request: mp_malloc_sync 00:04:38.159 EAL: No shared files mode enabled, IPC is disabled 00:04:38.159 EAL: Heap on socket 0 was expanded by 514MB 00:04:38.730 EAL: Calling mem event callback 'spdk:(nil)' 00:04:38.730 EAL: request: mp_malloc_sync 00:04:38.730 EAL: No shared files mode enabled, IPC is disabled 00:04:38.730 EAL: Heap on socket 0 was shrunk by 514MB 00:04:39.297 EAL: Trying to obtain current memory policy. 
00:04:39.297 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:39.297 EAL: Restoring previous memory policy: 4 00:04:39.297 EAL: Calling mem event callback 'spdk:(nil)' 00:04:39.297 EAL: request: mp_malloc_sync 00:04:39.297 EAL: No shared files mode enabled, IPC is disabled 00:04:39.297 EAL: Heap on socket 0 was expanded by 1026MB 00:04:40.231 EAL: Calling mem event callback 'spdk:(nil)' 00:04:40.489 EAL: request: mp_malloc_sync 00:04:40.489 EAL: No shared files mode enabled, IPC is disabled 00:04:40.489 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:41.056 passed 00:04:41.056 00:04:41.056 Run Summary: Type Total Ran Passed Failed Inactive 00:04:41.056 suites 1 1 n/a 0 0 00:04:41.056 tests 2 2 2 0 0 00:04:41.056 asserts 5775 5775 5775 0 n/a 00:04:41.056 00:04:41.056 Elapsed time = 4.840 seconds 00:04:41.056 EAL: Calling mem event callback 'spdk:(nil)' 00:04:41.056 EAL: request: mp_malloc_sync 00:04:41.056 EAL: No shared files mode enabled, IPC is disabled 00:04:41.056 EAL: Heap on socket 0 was shrunk by 2MB 00:04:41.056 EAL: No shared files mode enabled, IPC is disabled 00:04:41.056 EAL: No shared files mode enabled, IPC is disabled 00:04:41.056 EAL: No shared files mode enabled, IPC is disabled 00:04:41.314 00:04:41.314 real 0m5.119s 00:04:41.314 user 0m4.151s 00:04:41.314 sys 0m0.816s 00:04:41.314 19:06:25 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:41.314 19:06:25 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:41.314 ************************************ 00:04:41.314 END TEST env_vtophys 00:04:41.314 ************************************ 00:04:41.314 19:06:25 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:41.314 19:06:25 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:41.314 19:06:25 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:41.314 19:06:25 env -- common/autotest_common.sh@10 -- # set +x 00:04:41.314 ************************************ 00:04:41.314 START TEST env_pci 00:04:41.314 ************************************ 00:04:41.314 19:06:25 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:41.314 00:04:41.314 00:04:41.314 CUnit - A unit testing framework for C - Version 2.1-3 00:04:41.314 http://cunit.sourceforge.net/ 00:04:41.314 00:04:41.314 00:04:41.314 Suite: pci 00:04:41.314 Test: pci_hook ...[2024-12-16 19:06:25.517045] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 58853 has claimed it 00:04:41.314 passed 00:04:41.314 00:04:41.314 Run Summary: Type Total Ran Passed Failed Inactive 00:04:41.314 suites 1 1 n/a 0 0 00:04:41.314 tests 1 1 1 0 0 00:04:41.314 asserts 25 25 25 0 n/a 00:04:41.314 00:04:41.314 Elapsed time = 0.004 seconds 00:04:41.314 EAL: Cannot find device (10000:00:01.0) 00:04:41.314 EAL: Failed to attach device on primary process 00:04:41.314 ************************************ 00:04:41.314 END TEST env_pci 00:04:41.314 00:04:41.314 real 0m0.063s 00:04:41.314 user 0m0.033s 00:04:41.314 sys 0m0.029s 00:04:41.314 19:06:25 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:41.314 19:06:25 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:41.314 ************************************ 00:04:41.314 19:06:25 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:41.314 19:06:25 env -- env/env.sh@15 -- # uname 00:04:41.314 19:06:25 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:41.314 19:06:25 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:41.314 19:06:25 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:41.314 19:06:25 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:04:41.314 19:06:25 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:41.314 19:06:25 env -- common/autotest_common.sh@10 -- # set +x 00:04:41.314 ************************************ 00:04:41.314 START TEST env_dpdk_post_init 00:04:41.314 ************************************ 00:04:41.314 19:06:25 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:41.314 EAL: Detected CPU lcores: 10 00:04:41.314 EAL: Detected NUMA nodes: 1 00:04:41.314 EAL: Detected shared linkage of DPDK 00:04:41.572 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:41.572 EAL: Selected IOVA mode 'PA' 00:04:41.572 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:41.572 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:41.572 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:41.572 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:04:41.572 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:04:41.572 Starting DPDK initialization... 00:04:41.572 Starting SPDK post initialization... 00:04:41.572 SPDK NVMe probe 00:04:41.572 Attaching to 0000:00:10.0 00:04:41.572 Attaching to 0000:00:11.0 00:04:41.572 Attaching to 0000:00:12.0 00:04:41.572 Attaching to 0000:00:13.0 00:04:41.572 Attached to 0000:00:10.0 00:04:41.572 Attached to 0000:00:11.0 00:04:41.572 Attached to 0000:00:13.0 00:04:41.572 Attached to 0000:00:12.0 00:04:41.572 Cleaning up... 
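The four probe/attach pairs above assume the NVMe PCI functions were already unbound from the kernel nvme driver and hugepage memory was reserved; in autotest that is done beforehand by scripts/setup.sh. A hedged sketch of that step (the HUGEMEM size and the explicit allowlist here are illustrative, not read from this run):

sudo HUGEMEM=2048 PCI_ALLOWED="0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0" \
    /home/vagrant/spdk_repo/spdk/scripts/setup.sh

# verify what got bound and how much hugepage memory is reserved
sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status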
00:04:41.572 00:04:41.572 real 0m0.228s 00:04:41.572 user 0m0.064s 00:04:41.572 sys 0m0.065s 00:04:41.572 19:06:25 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:41.572 ************************************ 00:04:41.572 END TEST env_dpdk_post_init 00:04:41.572 ************************************ 00:04:41.572 19:06:25 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:41.572 19:06:25 env -- env/env.sh@26 -- # uname 00:04:41.572 19:06:25 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:41.572 19:06:25 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:41.572 19:06:25 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:41.572 19:06:25 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:41.572 19:06:25 env -- common/autotest_common.sh@10 -- # set +x 00:04:41.572 ************************************ 00:04:41.572 START TEST env_mem_callbacks 00:04:41.572 ************************************ 00:04:41.572 19:06:25 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:41.830 EAL: Detected CPU lcores: 10 00:04:41.830 EAL: Detected NUMA nodes: 1 00:04:41.830 EAL: Detected shared linkage of DPDK 00:04:41.830 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:41.830 EAL: Selected IOVA mode 'PA' 00:04:41.830 00:04:41.830 00:04:41.830 CUnit - A unit testing framework for C - Version 2.1-3 00:04:41.830 http://cunit.sourceforge.net/ 00:04:41.830 00:04:41.830 00:04:41.830 Suite: memory 00:04:41.830 Test: test ... 00:04:41.830 register 0x200000200000 2097152 00:04:41.830 malloc 3145728 00:04:41.830 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:41.830 register 0x200000400000 4194304 00:04:41.830 buf 0x2000004fffc0 len 3145728 PASSED 00:04:41.830 malloc 64 00:04:41.830 buf 0x2000004ffec0 len 64 PASSED 00:04:41.830 malloc 4194304 00:04:41.830 register 0x200000800000 6291456 00:04:41.830 buf 0x2000009fffc0 len 4194304 PASSED 00:04:41.830 free 0x2000004fffc0 3145728 00:04:41.830 free 0x2000004ffec0 64 00:04:41.830 unregister 0x200000400000 4194304 PASSED 00:04:41.830 free 0x2000009fffc0 4194304 00:04:41.830 unregister 0x200000800000 6291456 PASSED 00:04:41.830 malloc 8388608 00:04:41.830 register 0x200000400000 10485760 00:04:41.830 buf 0x2000005fffc0 len 8388608 PASSED 00:04:41.830 free 0x2000005fffc0 8388608 00:04:41.830 unregister 0x200000400000 10485760 PASSED 00:04:41.830 passed 00:04:41.830 00:04:41.830 Run Summary: Type Total Ran Passed Failed Inactive 00:04:41.830 suites 1 1 n/a 0 0 00:04:41.830 tests 1 1 1 0 0 00:04:41.830 asserts 15 15 15 0 n/a 00:04:41.830 00:04:41.830 Elapsed time = 0.038 seconds 00:04:41.830 00:04:41.830 real 0m0.205s 00:04:41.830 user 0m0.059s 00:04:41.830 sys 0m0.044s 00:04:41.830 19:06:26 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:41.830 ************************************ 00:04:41.830 END TEST env_mem_callbacks 00:04:41.830 ************************************ 00:04:41.830 19:06:26 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:41.830 00:04:41.830 real 0m6.338s 00:04:41.830 user 0m4.696s 00:04:41.830 sys 0m1.195s 00:04:41.830 ************************************ 00:04:41.830 END TEST env 00:04:41.830 ************************************ 00:04:41.830 19:06:26 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:41.830 19:06:26 env -- 
common/autotest_common.sh@10 -- # set +x 00:04:41.830 19:06:26 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:41.830 19:06:26 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:41.830 19:06:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:41.830 19:06:26 -- common/autotest_common.sh@10 -- # set +x 00:04:42.088 ************************************ 00:04:42.088 START TEST rpc 00:04:42.088 ************************************ 00:04:42.088 19:06:26 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:42.088 * Looking for test storage... 00:04:42.088 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:42.088 19:06:26 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:42.088 19:06:26 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:42.088 19:06:26 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:42.088 19:06:26 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:42.088 19:06:26 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:42.088 19:06:26 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:42.088 19:06:26 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:42.088 19:06:26 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:42.088 19:06:26 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:42.088 19:06:26 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:42.088 19:06:26 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:42.088 19:06:26 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:42.088 19:06:26 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:42.088 19:06:26 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:42.088 19:06:26 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:42.088 19:06:26 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:42.088 19:06:26 rpc -- scripts/common.sh@345 -- # : 1 00:04:42.088 19:06:26 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:42.088 19:06:26 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:42.088 19:06:26 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:42.088 19:06:26 rpc -- scripts/common.sh@353 -- # local d=1 00:04:42.088 19:06:26 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:42.088 19:06:26 rpc -- scripts/common.sh@355 -- # echo 1 00:04:42.088 19:06:26 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:42.088 19:06:26 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:42.088 19:06:26 rpc -- scripts/common.sh@353 -- # local d=2 00:04:42.088 19:06:26 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:42.088 19:06:26 rpc -- scripts/common.sh@355 -- # echo 2 00:04:42.088 19:06:26 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:42.088 19:06:26 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:42.088 19:06:26 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:42.088 19:06:26 rpc -- scripts/common.sh@368 -- # return 0 00:04:42.088 19:06:26 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:42.088 19:06:26 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:42.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.088 --rc genhtml_branch_coverage=1 00:04:42.088 --rc genhtml_function_coverage=1 00:04:42.088 --rc genhtml_legend=1 00:04:42.088 --rc geninfo_all_blocks=1 00:04:42.088 --rc geninfo_unexecuted_blocks=1 00:04:42.088 00:04:42.088 ' 00:04:42.088 19:06:26 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:42.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.088 --rc genhtml_branch_coverage=1 00:04:42.088 --rc genhtml_function_coverage=1 00:04:42.088 --rc genhtml_legend=1 00:04:42.088 --rc geninfo_all_blocks=1 00:04:42.088 --rc geninfo_unexecuted_blocks=1 00:04:42.088 00:04:42.088 ' 00:04:42.088 19:06:26 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:42.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.089 --rc genhtml_branch_coverage=1 00:04:42.089 --rc genhtml_function_coverage=1 00:04:42.089 --rc genhtml_legend=1 00:04:42.089 --rc geninfo_all_blocks=1 00:04:42.089 --rc geninfo_unexecuted_blocks=1 00:04:42.089 00:04:42.089 ' 00:04:42.089 19:06:26 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:42.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.089 --rc genhtml_branch_coverage=1 00:04:42.089 --rc genhtml_function_coverage=1 00:04:42.089 --rc genhtml_legend=1 00:04:42.089 --rc geninfo_all_blocks=1 00:04:42.089 --rc geninfo_unexecuted_blocks=1 00:04:42.089 00:04:42.089 ' 00:04:42.089 19:06:26 rpc -- rpc/rpc.sh@65 -- # spdk_pid=58980 00:04:42.089 19:06:26 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:42.089 19:06:26 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:42.089 19:06:26 rpc -- rpc/rpc.sh@67 -- # waitforlisten 58980 00:04:42.089 19:06:26 rpc -- common/autotest_common.sh@835 -- # '[' -z 58980 ']' 00:04:42.089 19:06:26 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:42.089 19:06:26 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:42.089 19:06:26 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:42.089 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
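At this point the rpc suite has launched spdk_tgt in the background and is blocking until its JSON-RPC socket answers. A condensed sketch of that launch-and-wait pattern (waitforlisten in autotest_common.sh does this more carefully, with timeouts and pid bookkeeping folded in):

/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev &
spdk_pid=$!

# poll the UNIX socket until the target answers a trivial RPC
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$spdk_pid" 2>/dev/null || { echo "spdk_tgt died before listening" >&2; exit 1; }
    sleep 0.1
done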
00:04:42.089 19:06:26 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:42.089 19:06:26 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:42.089 [2024-12-16 19:06:26.405517] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:04:42.089 [2024-12-16 19:06:26.405632] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58980 ] 00:04:42.346 [2024-12-16 19:06:26.560282] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:42.346 [2024-12-16 19:06:26.642358] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:42.346 [2024-12-16 19:06:26.642402] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 58980' to capture a snapshot of events at runtime. 00:04:42.346 [2024-12-16 19:06:26.642410] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:42.346 [2024-12-16 19:06:26.642418] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:42.346 [2024-12-16 19:06:26.642424] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid58980 for offline analysis/debug. 00:04:42.346 [2024-12-16 19:06:26.643096] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:42.911 19:06:27 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:42.911 19:06:27 rpc -- common/autotest_common.sh@868 -- # return 0 00:04:42.911 19:06:27 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:42.912 19:06:27 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:42.912 19:06:27 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:42.912 19:06:27 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:42.912 19:06:27 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:42.912 19:06:27 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:42.912 19:06:27 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:42.912 ************************************ 00:04:42.912 START TEST rpc_integrity 00:04:42.912 ************************************ 00:04:42.912 19:06:27 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:42.912 19:06:27 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:42.912 19:06:27 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:42.912 19:06:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:43.170 19:06:27 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:43.170 19:06:27 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:43.170 19:06:27 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:43.170 19:06:27 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:43.170 19:06:27 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:43.170 19:06:27 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:43.170 19:06:27 
rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:43.170 19:06:27 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:43.170 19:06:27 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:43.170 19:06:27 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:43.170 19:06:27 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:43.170 19:06:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:43.170 19:06:27 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:43.170 19:06:27 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:43.170 { 00:04:43.170 "name": "Malloc0", 00:04:43.170 "aliases": [ 00:04:43.170 "4cec277e-42d7-45cf-b0f9-5f3bc9a2f969" 00:04:43.170 ], 00:04:43.170 "product_name": "Malloc disk", 00:04:43.170 "block_size": 512, 00:04:43.170 "num_blocks": 16384, 00:04:43.170 "uuid": "4cec277e-42d7-45cf-b0f9-5f3bc9a2f969", 00:04:43.170 "assigned_rate_limits": { 00:04:43.170 "rw_ios_per_sec": 0, 00:04:43.170 "rw_mbytes_per_sec": 0, 00:04:43.170 "r_mbytes_per_sec": 0, 00:04:43.170 "w_mbytes_per_sec": 0 00:04:43.170 }, 00:04:43.170 "claimed": false, 00:04:43.170 "zoned": false, 00:04:43.170 "supported_io_types": { 00:04:43.170 "read": true, 00:04:43.170 "write": true, 00:04:43.170 "unmap": true, 00:04:43.170 "flush": true, 00:04:43.170 "reset": true, 00:04:43.170 "nvme_admin": false, 00:04:43.170 "nvme_io": false, 00:04:43.170 "nvme_io_md": false, 00:04:43.170 "write_zeroes": true, 00:04:43.170 "zcopy": true, 00:04:43.170 "get_zone_info": false, 00:04:43.170 "zone_management": false, 00:04:43.170 "zone_append": false, 00:04:43.170 "compare": false, 00:04:43.170 "compare_and_write": false, 00:04:43.170 "abort": true, 00:04:43.170 "seek_hole": false, 00:04:43.170 "seek_data": false, 00:04:43.170 "copy": true, 00:04:43.170 "nvme_iov_md": false 00:04:43.170 }, 00:04:43.170 "memory_domains": [ 00:04:43.170 { 00:04:43.170 "dma_device_id": "system", 00:04:43.170 "dma_device_type": 1 00:04:43.170 }, 00:04:43.170 { 00:04:43.170 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:43.170 "dma_device_type": 2 00:04:43.170 } 00:04:43.170 ], 00:04:43.170 "driver_specific": {} 00:04:43.170 } 00:04:43.170 ]' 00:04:43.170 19:06:27 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:43.170 19:06:27 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:43.170 19:06:27 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:43.170 19:06:27 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:43.170 19:06:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:43.170 [2024-12-16 19:06:27.363376] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:43.170 [2024-12-16 19:06:27.363426] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:43.170 [2024-12-16 19:06:27.363446] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:04:43.170 [2024-12-16 19:06:27.363455] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:43.170 [2024-12-16 19:06:27.365395] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:43.170 [2024-12-16 19:06:27.365431] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:43.170 Passthru0 00:04:43.170 19:06:27 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:43.170 
19:06:27 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:43.170 19:06:27 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:43.170 19:06:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:43.170 19:06:27 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:43.170 19:06:27 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:43.170 { 00:04:43.170 "name": "Malloc0", 00:04:43.170 "aliases": [ 00:04:43.170 "4cec277e-42d7-45cf-b0f9-5f3bc9a2f969" 00:04:43.170 ], 00:04:43.170 "product_name": "Malloc disk", 00:04:43.170 "block_size": 512, 00:04:43.170 "num_blocks": 16384, 00:04:43.170 "uuid": "4cec277e-42d7-45cf-b0f9-5f3bc9a2f969", 00:04:43.170 "assigned_rate_limits": { 00:04:43.170 "rw_ios_per_sec": 0, 00:04:43.170 "rw_mbytes_per_sec": 0, 00:04:43.170 "r_mbytes_per_sec": 0, 00:04:43.171 "w_mbytes_per_sec": 0 00:04:43.171 }, 00:04:43.171 "claimed": true, 00:04:43.171 "claim_type": "exclusive_write", 00:04:43.171 "zoned": false, 00:04:43.171 "supported_io_types": { 00:04:43.171 "read": true, 00:04:43.171 "write": true, 00:04:43.171 "unmap": true, 00:04:43.171 "flush": true, 00:04:43.171 "reset": true, 00:04:43.171 "nvme_admin": false, 00:04:43.171 "nvme_io": false, 00:04:43.171 "nvme_io_md": false, 00:04:43.171 "write_zeroes": true, 00:04:43.171 "zcopy": true, 00:04:43.171 "get_zone_info": false, 00:04:43.171 "zone_management": false, 00:04:43.171 "zone_append": false, 00:04:43.171 "compare": false, 00:04:43.171 "compare_and_write": false, 00:04:43.171 "abort": true, 00:04:43.171 "seek_hole": false, 00:04:43.171 "seek_data": false, 00:04:43.171 "copy": true, 00:04:43.171 "nvme_iov_md": false 00:04:43.171 }, 00:04:43.171 "memory_domains": [ 00:04:43.171 { 00:04:43.171 "dma_device_id": "system", 00:04:43.171 "dma_device_type": 1 00:04:43.171 }, 00:04:43.171 { 00:04:43.171 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:43.171 "dma_device_type": 2 00:04:43.171 } 00:04:43.171 ], 00:04:43.171 "driver_specific": {} 00:04:43.171 }, 00:04:43.171 { 00:04:43.171 "name": "Passthru0", 00:04:43.171 "aliases": [ 00:04:43.171 "b6489904-23f0-5945-9407-c11af6a36391" 00:04:43.171 ], 00:04:43.171 "product_name": "passthru", 00:04:43.171 "block_size": 512, 00:04:43.171 "num_blocks": 16384, 00:04:43.171 "uuid": "b6489904-23f0-5945-9407-c11af6a36391", 00:04:43.171 "assigned_rate_limits": { 00:04:43.171 "rw_ios_per_sec": 0, 00:04:43.171 "rw_mbytes_per_sec": 0, 00:04:43.171 "r_mbytes_per_sec": 0, 00:04:43.171 "w_mbytes_per_sec": 0 00:04:43.171 }, 00:04:43.171 "claimed": false, 00:04:43.171 "zoned": false, 00:04:43.171 "supported_io_types": { 00:04:43.171 "read": true, 00:04:43.171 "write": true, 00:04:43.171 "unmap": true, 00:04:43.171 "flush": true, 00:04:43.171 "reset": true, 00:04:43.171 "nvme_admin": false, 00:04:43.171 "nvme_io": false, 00:04:43.171 "nvme_io_md": false, 00:04:43.171 "write_zeroes": true, 00:04:43.171 "zcopy": true, 00:04:43.171 "get_zone_info": false, 00:04:43.171 "zone_management": false, 00:04:43.171 "zone_append": false, 00:04:43.171 "compare": false, 00:04:43.171 "compare_and_write": false, 00:04:43.171 "abort": true, 00:04:43.171 "seek_hole": false, 00:04:43.171 "seek_data": false, 00:04:43.171 "copy": true, 00:04:43.171 "nvme_iov_md": false 00:04:43.171 }, 00:04:43.171 "memory_domains": [ 00:04:43.171 { 00:04:43.171 "dma_device_id": "system", 00:04:43.171 "dma_device_type": 1 00:04:43.171 }, 00:04:43.171 { 00:04:43.171 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:43.171 "dma_device_type": 2 
00:04:43.171 } 00:04:43.171 ], 00:04:43.171 "driver_specific": { 00:04:43.171 "passthru": { 00:04:43.171 "name": "Passthru0", 00:04:43.171 "base_bdev_name": "Malloc0" 00:04:43.171 } 00:04:43.171 } 00:04:43.171 } 00:04:43.171 ]' 00:04:43.171 19:06:27 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:43.171 19:06:27 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:43.171 19:06:27 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:43.171 19:06:27 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:43.171 19:06:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:43.171 19:06:27 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:43.171 19:06:27 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:43.171 19:06:27 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:43.171 19:06:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:43.171 19:06:27 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:43.171 19:06:27 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:43.171 19:06:27 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:43.171 19:06:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:43.171 19:06:27 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:43.171 19:06:27 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:43.171 19:06:27 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:43.171 ************************************ 00:04:43.171 END TEST rpc_integrity 00:04:43.171 ************************************ 00:04:43.171 19:06:27 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:43.171 00:04:43.171 real 0m0.242s 00:04:43.171 user 0m0.131s 00:04:43.171 sys 0m0.036s 00:04:43.171 19:06:27 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:43.171 19:06:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:43.429 19:06:27 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:43.429 19:06:27 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:43.429 19:06:27 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:43.429 19:06:27 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:43.429 ************************************ 00:04:43.429 START TEST rpc_plugins 00:04:43.429 ************************************ 00:04:43.429 19:06:27 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:04:43.429 19:06:27 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:43.429 19:06:27 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:43.429 19:06:27 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:43.429 19:06:27 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:43.429 19:06:27 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:43.429 19:06:27 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:43.429 19:06:27 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:43.429 19:06:27 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:43.429 19:06:27 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:43.429 19:06:27 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:43.429 { 00:04:43.429 "name": "Malloc1", 00:04:43.429 "aliases": 
[ 00:04:43.429 "1ecbdf7e-d83a-47c7-838c-07f7163d110f" 00:04:43.429 ], 00:04:43.429 "product_name": "Malloc disk", 00:04:43.429 "block_size": 4096, 00:04:43.429 "num_blocks": 256, 00:04:43.430 "uuid": "1ecbdf7e-d83a-47c7-838c-07f7163d110f", 00:04:43.430 "assigned_rate_limits": { 00:04:43.430 "rw_ios_per_sec": 0, 00:04:43.430 "rw_mbytes_per_sec": 0, 00:04:43.430 "r_mbytes_per_sec": 0, 00:04:43.430 "w_mbytes_per_sec": 0 00:04:43.430 }, 00:04:43.430 "claimed": false, 00:04:43.430 "zoned": false, 00:04:43.430 "supported_io_types": { 00:04:43.430 "read": true, 00:04:43.430 "write": true, 00:04:43.430 "unmap": true, 00:04:43.430 "flush": true, 00:04:43.430 "reset": true, 00:04:43.430 "nvme_admin": false, 00:04:43.430 "nvme_io": false, 00:04:43.430 "nvme_io_md": false, 00:04:43.430 "write_zeroes": true, 00:04:43.430 "zcopy": true, 00:04:43.430 "get_zone_info": false, 00:04:43.430 "zone_management": false, 00:04:43.430 "zone_append": false, 00:04:43.430 "compare": false, 00:04:43.430 "compare_and_write": false, 00:04:43.430 "abort": true, 00:04:43.430 "seek_hole": false, 00:04:43.430 "seek_data": false, 00:04:43.430 "copy": true, 00:04:43.430 "nvme_iov_md": false 00:04:43.430 }, 00:04:43.430 "memory_domains": [ 00:04:43.430 { 00:04:43.430 "dma_device_id": "system", 00:04:43.430 "dma_device_type": 1 00:04:43.430 }, 00:04:43.430 { 00:04:43.430 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:43.430 "dma_device_type": 2 00:04:43.430 } 00:04:43.430 ], 00:04:43.430 "driver_specific": {} 00:04:43.430 } 00:04:43.430 ]' 00:04:43.430 19:06:27 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:43.430 19:06:27 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:43.430 19:06:27 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:43.430 19:06:27 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:43.430 19:06:27 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:43.430 19:06:27 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:43.430 19:06:27 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:43.430 19:06:27 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:43.430 19:06:27 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:43.430 19:06:27 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:43.430 19:06:27 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:43.430 19:06:27 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:43.430 ************************************ 00:04:43.430 END TEST rpc_plugins 00:04:43.430 ************************************ 00:04:43.430 19:06:27 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:43.430 00:04:43.430 real 0m0.104s 00:04:43.430 user 0m0.058s 00:04:43.430 sys 0m0.016s 00:04:43.430 19:06:27 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:43.430 19:06:27 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:43.430 19:06:27 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:43.430 19:06:27 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:43.430 19:06:27 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:43.430 19:06:27 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:43.430 ************************************ 00:04:43.430 START TEST rpc_trace_cmd_test 00:04:43.430 ************************************ 00:04:43.430 19:06:27 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 
-- # rpc_trace_cmd_test 00:04:43.430 19:06:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:43.430 19:06:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:43.430 19:06:27 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:43.430 19:06:27 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:43.430 19:06:27 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:43.430 19:06:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:43.430 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid58980", 00:04:43.430 "tpoint_group_mask": "0x8", 00:04:43.430 "iscsi_conn": { 00:04:43.430 "mask": "0x2", 00:04:43.430 "tpoint_mask": "0x0" 00:04:43.430 }, 00:04:43.430 "scsi": { 00:04:43.430 "mask": "0x4", 00:04:43.430 "tpoint_mask": "0x0" 00:04:43.430 }, 00:04:43.430 "bdev": { 00:04:43.430 "mask": "0x8", 00:04:43.430 "tpoint_mask": "0xffffffffffffffff" 00:04:43.430 }, 00:04:43.430 "nvmf_rdma": { 00:04:43.430 "mask": "0x10", 00:04:43.430 "tpoint_mask": "0x0" 00:04:43.430 }, 00:04:43.430 "nvmf_tcp": { 00:04:43.430 "mask": "0x20", 00:04:43.430 "tpoint_mask": "0x0" 00:04:43.430 }, 00:04:43.430 "ftl": { 00:04:43.430 "mask": "0x40", 00:04:43.430 "tpoint_mask": "0x0" 00:04:43.430 }, 00:04:43.430 "blobfs": { 00:04:43.430 "mask": "0x80", 00:04:43.430 "tpoint_mask": "0x0" 00:04:43.430 }, 00:04:43.430 "dsa": { 00:04:43.430 "mask": "0x200", 00:04:43.430 "tpoint_mask": "0x0" 00:04:43.430 }, 00:04:43.430 "thread": { 00:04:43.430 "mask": "0x400", 00:04:43.430 "tpoint_mask": "0x0" 00:04:43.430 }, 00:04:43.430 "nvme_pcie": { 00:04:43.430 "mask": "0x800", 00:04:43.430 "tpoint_mask": "0x0" 00:04:43.430 }, 00:04:43.430 "iaa": { 00:04:43.430 "mask": "0x1000", 00:04:43.430 "tpoint_mask": "0x0" 00:04:43.430 }, 00:04:43.430 "nvme_tcp": { 00:04:43.430 "mask": "0x2000", 00:04:43.430 "tpoint_mask": "0x0" 00:04:43.430 }, 00:04:43.430 "bdev_nvme": { 00:04:43.430 "mask": "0x4000", 00:04:43.430 "tpoint_mask": "0x0" 00:04:43.430 }, 00:04:43.430 "sock": { 00:04:43.430 "mask": "0x8000", 00:04:43.430 "tpoint_mask": "0x0" 00:04:43.430 }, 00:04:43.430 "blob": { 00:04:43.430 "mask": "0x10000", 00:04:43.430 "tpoint_mask": "0x0" 00:04:43.430 }, 00:04:43.430 "bdev_raid": { 00:04:43.430 "mask": "0x20000", 00:04:43.430 "tpoint_mask": "0x0" 00:04:43.430 }, 00:04:43.430 "scheduler": { 00:04:43.430 "mask": "0x40000", 00:04:43.430 "tpoint_mask": "0x0" 00:04:43.430 } 00:04:43.430 }' 00:04:43.430 19:06:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:43.430 19:06:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:43.430 19:06:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:43.430 19:06:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:43.430 19:06:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:43.688 19:06:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:43.688 19:06:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:43.688 19:06:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:43.688 19:06:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:43.688 ************************************ 00:04:43.688 END TEST rpc_trace_cmd_test 00:04:43.688 ************************************ 00:04:43.688 19:06:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:43.688 00:04:43.688 real 0m0.160s 
00:04:43.688 user 0m0.129s 00:04:43.688 sys 0m0.020s 00:04:43.688 19:06:27 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:43.688 19:06:27 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:43.688 19:06:27 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:43.688 19:06:27 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:43.688 19:06:27 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:43.688 19:06:27 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:43.688 19:06:27 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:43.689 19:06:27 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:43.689 ************************************ 00:04:43.689 START TEST rpc_daemon_integrity 00:04:43.689 ************************************ 00:04:43.689 19:06:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:43.689 19:06:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:43.689 19:06:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:43.689 19:06:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:43.689 19:06:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:43.689 19:06:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:43.689 19:06:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:43.689 19:06:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:43.689 19:06:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:43.689 19:06:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:43.689 19:06:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:43.689 19:06:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:43.689 19:06:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:43.689 19:06:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:43.689 19:06:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:43.689 19:06:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:43.689 19:06:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:43.689 19:06:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:43.689 { 00:04:43.689 "name": "Malloc2", 00:04:43.689 "aliases": [ 00:04:43.689 "469c625c-a01f-4c57-a714-f6709345d1f4" 00:04:43.689 ], 00:04:43.689 "product_name": "Malloc disk", 00:04:43.689 "block_size": 512, 00:04:43.689 "num_blocks": 16384, 00:04:43.689 "uuid": "469c625c-a01f-4c57-a714-f6709345d1f4", 00:04:43.689 "assigned_rate_limits": { 00:04:43.689 "rw_ios_per_sec": 0, 00:04:43.689 "rw_mbytes_per_sec": 0, 00:04:43.689 "r_mbytes_per_sec": 0, 00:04:43.689 "w_mbytes_per_sec": 0 00:04:43.689 }, 00:04:43.689 "claimed": false, 00:04:43.689 "zoned": false, 00:04:43.689 "supported_io_types": { 00:04:43.689 "read": true, 00:04:43.689 "write": true, 00:04:43.689 "unmap": true, 00:04:43.689 "flush": true, 00:04:43.689 "reset": true, 00:04:43.689 "nvme_admin": false, 00:04:43.689 "nvme_io": false, 00:04:43.689 "nvme_io_md": false, 00:04:43.689 "write_zeroes": true, 00:04:43.689 "zcopy": true, 00:04:43.689 "get_zone_info": false, 00:04:43.689 "zone_management": false, 00:04:43.689 "zone_append": false, 00:04:43.689 "compare": false, 00:04:43.689 
"compare_and_write": false, 00:04:43.689 "abort": true, 00:04:43.689 "seek_hole": false, 00:04:43.689 "seek_data": false, 00:04:43.689 "copy": true, 00:04:43.689 "nvme_iov_md": false 00:04:43.689 }, 00:04:43.689 "memory_domains": [ 00:04:43.689 { 00:04:43.689 "dma_device_id": "system", 00:04:43.689 "dma_device_type": 1 00:04:43.689 }, 00:04:43.689 { 00:04:43.689 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:43.689 "dma_device_type": 2 00:04:43.689 } 00:04:43.689 ], 00:04:43.689 "driver_specific": {} 00:04:43.689 } 00:04:43.689 ]' 00:04:43.689 19:06:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:43.689 19:06:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:43.689 19:06:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:43.689 19:06:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:43.689 19:06:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:43.689 [2024-12-16 19:06:27.979911] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:43.689 [2024-12-16 19:06:27.979957] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:43.689 [2024-12-16 19:06:27.979974] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:04:43.689 [2024-12-16 19:06:27.979983] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:43.689 [2024-12-16 19:06:27.981679] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:43.689 [2024-12-16 19:06:27.981712] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:43.689 Passthru0 00:04:43.689 19:06:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:43.689 19:06:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:43.689 19:06:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:43.689 19:06:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:43.689 19:06:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:43.689 19:06:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:43.689 { 00:04:43.689 "name": "Malloc2", 00:04:43.689 "aliases": [ 00:04:43.689 "469c625c-a01f-4c57-a714-f6709345d1f4" 00:04:43.689 ], 00:04:43.689 "product_name": "Malloc disk", 00:04:43.689 "block_size": 512, 00:04:43.689 "num_blocks": 16384, 00:04:43.689 "uuid": "469c625c-a01f-4c57-a714-f6709345d1f4", 00:04:43.689 "assigned_rate_limits": { 00:04:43.689 "rw_ios_per_sec": 0, 00:04:43.689 "rw_mbytes_per_sec": 0, 00:04:43.689 "r_mbytes_per_sec": 0, 00:04:43.689 "w_mbytes_per_sec": 0 00:04:43.689 }, 00:04:43.689 "claimed": true, 00:04:43.689 "claim_type": "exclusive_write", 00:04:43.689 "zoned": false, 00:04:43.689 "supported_io_types": { 00:04:43.689 "read": true, 00:04:43.689 "write": true, 00:04:43.689 "unmap": true, 00:04:43.689 "flush": true, 00:04:43.689 "reset": true, 00:04:43.689 "nvme_admin": false, 00:04:43.689 "nvme_io": false, 00:04:43.689 "nvme_io_md": false, 00:04:43.689 "write_zeroes": true, 00:04:43.689 "zcopy": true, 00:04:43.689 "get_zone_info": false, 00:04:43.689 "zone_management": false, 00:04:43.689 "zone_append": false, 00:04:43.689 "compare": false, 00:04:43.689 "compare_and_write": false, 00:04:43.689 "abort": true, 00:04:43.689 "seek_hole": false, 00:04:43.689 "seek_data": false, 
00:04:43.689 "copy": true, 00:04:43.689 "nvme_iov_md": false 00:04:43.689 }, 00:04:43.689 "memory_domains": [ 00:04:43.689 { 00:04:43.689 "dma_device_id": "system", 00:04:43.689 "dma_device_type": 1 00:04:43.689 }, 00:04:43.689 { 00:04:43.689 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:43.689 "dma_device_type": 2 00:04:43.689 } 00:04:43.689 ], 00:04:43.689 "driver_specific": {} 00:04:43.689 }, 00:04:43.689 { 00:04:43.689 "name": "Passthru0", 00:04:43.689 "aliases": [ 00:04:43.689 "002f780e-6cc1-5a9b-8dbd-c0c559f846ae" 00:04:43.689 ], 00:04:43.689 "product_name": "passthru", 00:04:43.689 "block_size": 512, 00:04:43.689 "num_blocks": 16384, 00:04:43.689 "uuid": "002f780e-6cc1-5a9b-8dbd-c0c559f846ae", 00:04:43.689 "assigned_rate_limits": { 00:04:43.689 "rw_ios_per_sec": 0, 00:04:43.689 "rw_mbytes_per_sec": 0, 00:04:43.689 "r_mbytes_per_sec": 0, 00:04:43.689 "w_mbytes_per_sec": 0 00:04:43.689 }, 00:04:43.689 "claimed": false, 00:04:43.689 "zoned": false, 00:04:43.689 "supported_io_types": { 00:04:43.689 "read": true, 00:04:43.689 "write": true, 00:04:43.689 "unmap": true, 00:04:43.689 "flush": true, 00:04:43.689 "reset": true, 00:04:43.689 "nvme_admin": false, 00:04:43.689 "nvme_io": false, 00:04:43.689 "nvme_io_md": false, 00:04:43.689 "write_zeroes": true, 00:04:43.689 "zcopy": true, 00:04:43.689 "get_zone_info": false, 00:04:43.689 "zone_management": false, 00:04:43.689 "zone_append": false, 00:04:43.689 "compare": false, 00:04:43.689 "compare_and_write": false, 00:04:43.689 "abort": true, 00:04:43.689 "seek_hole": false, 00:04:43.689 "seek_data": false, 00:04:43.689 "copy": true, 00:04:43.689 "nvme_iov_md": false 00:04:43.689 }, 00:04:43.689 "memory_domains": [ 00:04:43.689 { 00:04:43.689 "dma_device_id": "system", 00:04:43.689 "dma_device_type": 1 00:04:43.689 }, 00:04:43.689 { 00:04:43.689 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:43.689 "dma_device_type": 2 00:04:43.689 } 00:04:43.689 ], 00:04:43.689 "driver_specific": { 00:04:43.689 "passthru": { 00:04:43.689 "name": "Passthru0", 00:04:43.689 "base_bdev_name": "Malloc2" 00:04:43.689 } 00:04:43.689 } 00:04:43.689 } 00:04:43.689 ]' 00:04:43.689 19:06:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:43.689 19:06:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:43.689 19:06:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:43.689 19:06:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:43.689 19:06:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:43.947 19:06:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:43.947 19:06:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:43.947 19:06:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:43.947 19:06:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:43.947 19:06:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:43.948 19:06:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:43.948 19:06:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:43.948 19:06:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:43.948 19:06:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:43.948 19:06:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 
00:04:43.948 19:06:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:43.948 ************************************ 00:04:43.948 END TEST rpc_daemon_integrity 00:04:43.948 ************************************ 00:04:43.948 19:06:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:43.948 00:04:43.948 real 0m0.229s 00:04:43.948 user 0m0.132s 00:04:43.948 sys 0m0.030s 00:04:43.948 19:06:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:43.948 19:06:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:43.948 19:06:28 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:43.948 19:06:28 rpc -- rpc/rpc.sh@84 -- # killprocess 58980 00:04:43.948 19:06:28 rpc -- common/autotest_common.sh@954 -- # '[' -z 58980 ']' 00:04:43.948 19:06:28 rpc -- common/autotest_common.sh@958 -- # kill -0 58980 00:04:43.948 19:06:28 rpc -- common/autotest_common.sh@959 -- # uname 00:04:43.948 19:06:28 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:43.948 19:06:28 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58980 00:04:43.948 killing process with pid 58980 00:04:43.948 19:06:28 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:43.948 19:06:28 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:43.948 19:06:28 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58980' 00:04:43.948 19:06:28 rpc -- common/autotest_common.sh@973 -- # kill 58980 00:04:43.948 19:06:28 rpc -- common/autotest_common.sh@978 -- # wait 58980 00:04:45.321 00:04:45.321 real 0m3.134s 00:04:45.321 user 0m3.584s 00:04:45.321 sys 0m0.555s 00:04:45.321 19:06:29 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:45.321 ************************************ 00:04:45.321 END TEST rpc 00:04:45.321 ************************************ 00:04:45.321 19:06:29 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:45.321 19:06:29 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:45.321 19:06:29 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:45.321 19:06:29 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:45.321 19:06:29 -- common/autotest_common.sh@10 -- # set +x 00:04:45.321 ************************************ 00:04:45.321 START TEST skip_rpc 00:04:45.321 ************************************ 00:04:45.321 19:06:29 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:45.321 * Looking for test storage... 
00:04:45.321 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:45.321 19:06:29 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:45.321 19:06:29 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:45.321 19:06:29 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:45.321 19:06:29 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:45.321 19:06:29 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:45.321 19:06:29 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:45.321 19:06:29 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:45.321 19:06:29 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:45.321 19:06:29 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:45.321 19:06:29 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:45.321 19:06:29 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:45.321 19:06:29 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:45.321 19:06:29 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:45.321 19:06:29 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:45.321 19:06:29 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:45.321 19:06:29 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:45.321 19:06:29 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:45.321 19:06:29 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:45.321 19:06:29 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:45.321 19:06:29 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:45.321 19:06:29 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:45.321 19:06:29 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:45.321 19:06:29 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:45.321 19:06:29 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:45.321 19:06:29 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:45.321 19:06:29 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:45.321 19:06:29 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:45.321 19:06:29 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:45.321 19:06:29 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:45.321 19:06:29 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:45.321 19:06:29 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:45.321 19:06:29 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:45.322 19:06:29 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:45.322 19:06:29 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:45.322 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.322 --rc genhtml_branch_coverage=1 00:04:45.322 --rc genhtml_function_coverage=1 00:04:45.322 --rc genhtml_legend=1 00:04:45.322 --rc geninfo_all_blocks=1 00:04:45.322 --rc geninfo_unexecuted_blocks=1 00:04:45.322 00:04:45.322 ' 00:04:45.322 19:06:29 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:45.322 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.322 --rc genhtml_branch_coverage=1 00:04:45.322 --rc genhtml_function_coverage=1 00:04:45.322 --rc genhtml_legend=1 00:04:45.322 --rc geninfo_all_blocks=1 00:04:45.322 --rc geninfo_unexecuted_blocks=1 00:04:45.322 00:04:45.322 ' 00:04:45.322 19:06:29 skip_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:04:45.322 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.322 --rc genhtml_branch_coverage=1 00:04:45.322 --rc genhtml_function_coverage=1 00:04:45.322 --rc genhtml_legend=1 00:04:45.322 --rc geninfo_all_blocks=1 00:04:45.322 --rc geninfo_unexecuted_blocks=1 00:04:45.322 00:04:45.322 ' 00:04:45.322 19:06:29 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:45.322 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.322 --rc genhtml_branch_coverage=1 00:04:45.322 --rc genhtml_function_coverage=1 00:04:45.322 --rc genhtml_legend=1 00:04:45.322 --rc geninfo_all_blocks=1 00:04:45.322 --rc geninfo_unexecuted_blocks=1 00:04:45.322 00:04:45.322 ' 00:04:45.322 19:06:29 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:45.322 19:06:29 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:45.322 19:06:29 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:45.322 19:06:29 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:45.322 19:06:29 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:45.322 19:06:29 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:45.322 ************************************ 00:04:45.322 START TEST skip_rpc 00:04:45.322 ************************************ 00:04:45.322 19:06:29 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:04:45.322 19:06:29 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=59187 00:04:45.322 19:06:29 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:45.322 19:06:29 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:45.322 19:06:29 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:45.322 [2024-12-16 19:06:29.596702] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:04:45.322 [2024-12-16 19:06:29.596823] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59187 ] 00:04:45.581 [2024-12-16 19:06:29.752459] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:45.581 [2024-12-16 19:06:29.833717] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:50.868 19:06:34 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:50.868 19:06:34 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:50.868 19:06:34 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:50.868 19:06:34 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:50.868 19:06:34 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:50.868 19:06:34 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:50.868 19:06:34 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:50.868 19:06:34 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:04:50.868 19:06:34 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:50.868 19:06:34 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:50.868 19:06:34 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:50.868 19:06:34 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:50.868 19:06:34 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:50.868 19:06:34 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:50.868 19:06:34 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:50.868 19:06:34 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:50.868 19:06:34 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 59187 00:04:50.868 19:06:34 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 59187 ']' 00:04:50.868 19:06:34 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 59187 00:04:50.868 19:06:34 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:04:50.868 19:06:34 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:50.868 19:06:34 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59187 00:04:50.868 killing process with pid 59187 00:04:50.868 19:06:34 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:50.868 19:06:34 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:50.868 19:06:34 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59187' 00:04:50.868 19:06:34 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 59187 00:04:50.868 19:06:34 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 59187 00:04:51.433 00:04:51.433 real 0m6.203s 00:04:51.433 user 0m5.844s 00:04:51.433 sys 0m0.257s 00:04:51.433 19:06:35 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:51.433 19:06:35 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:51.433 ************************************ 00:04:51.433 END TEST skip_rpc 00:04:51.433 
************************************ 00:04:51.433 19:06:35 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:51.433 19:06:35 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:51.433 19:06:35 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:51.433 19:06:35 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:51.433 ************************************ 00:04:51.433 START TEST skip_rpc_with_json 00:04:51.433 ************************************ 00:04:51.433 19:06:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:04:51.433 19:06:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:51.433 19:06:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=59280 00:04:51.433 19:06:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:51.433 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:51.433 19:06:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 59280 00:04:51.433 19:06:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 59280 ']' 00:04:51.433 19:06:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:51.433 19:06:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:51.433 19:06:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:51.433 19:06:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:51.433 19:06:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:51.433 19:06:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:51.690 [2024-12-16 19:06:35.840374] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:04:51.690 [2024-12-16 19:06:35.840618] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59280 ] 00:04:51.690 [2024-12-16 19:06:35.999768] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:51.948 [2024-12-16 19:06:36.096359] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.513 19:06:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:52.513 19:06:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:04:52.513 19:06:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:52.513 19:06:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:52.513 19:06:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:52.513 [2024-12-16 19:06:36.686066] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:52.513 request: 00:04:52.513 { 00:04:52.513 "trtype": "tcp", 00:04:52.513 "method": "nvmf_get_transports", 00:04:52.513 "req_id": 1 00:04:52.513 } 00:04:52.513 Got JSON-RPC error response 00:04:52.513 response: 00:04:52.513 { 00:04:52.513 "code": -19, 00:04:52.513 "message": "No such device" 00:04:52.513 } 00:04:52.513 19:06:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:52.513 19:06:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:52.513 19:06:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:52.513 19:06:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:52.513 [2024-12-16 19:06:36.698167] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:52.513 19:06:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:52.513 19:06:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:52.513 19:06:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:52.513 19:06:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:52.513 19:06:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:52.513 19:06:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:52.513 { 00:04:52.513 "subsystems": [ 00:04:52.513 { 00:04:52.513 "subsystem": "fsdev", 00:04:52.513 "config": [ 00:04:52.513 { 00:04:52.513 "method": "fsdev_set_opts", 00:04:52.513 "params": { 00:04:52.513 "fsdev_io_pool_size": 65535, 00:04:52.513 "fsdev_io_cache_size": 256 00:04:52.513 } 00:04:52.513 } 00:04:52.513 ] 00:04:52.513 }, 00:04:52.513 { 00:04:52.513 "subsystem": "keyring", 00:04:52.513 "config": [] 00:04:52.513 }, 00:04:52.513 { 00:04:52.513 "subsystem": "iobuf", 00:04:52.513 "config": [ 00:04:52.513 { 00:04:52.513 "method": "iobuf_set_options", 00:04:52.513 "params": { 00:04:52.513 "small_pool_count": 8192, 00:04:52.513 "large_pool_count": 1024, 00:04:52.513 "small_bufsize": 8192, 00:04:52.513 "large_bufsize": 135168, 00:04:52.513 "enable_numa": false 00:04:52.513 } 00:04:52.514 } 00:04:52.514 ] 00:04:52.514 }, 00:04:52.514 { 00:04:52.514 "subsystem": "sock", 00:04:52.514 "config": [ 00:04:52.514 { 
00:04:52.514 "method": "sock_set_default_impl", 00:04:52.514 "params": { 00:04:52.514 "impl_name": "posix" 00:04:52.514 } 00:04:52.514 }, 00:04:52.514 { 00:04:52.514 "method": "sock_impl_set_options", 00:04:52.514 "params": { 00:04:52.514 "impl_name": "ssl", 00:04:52.514 "recv_buf_size": 4096, 00:04:52.514 "send_buf_size": 4096, 00:04:52.514 "enable_recv_pipe": true, 00:04:52.514 "enable_quickack": false, 00:04:52.514 "enable_placement_id": 0, 00:04:52.514 "enable_zerocopy_send_server": true, 00:04:52.514 "enable_zerocopy_send_client": false, 00:04:52.514 "zerocopy_threshold": 0, 00:04:52.514 "tls_version": 0, 00:04:52.514 "enable_ktls": false 00:04:52.514 } 00:04:52.514 }, 00:04:52.514 { 00:04:52.514 "method": "sock_impl_set_options", 00:04:52.514 "params": { 00:04:52.514 "impl_name": "posix", 00:04:52.514 "recv_buf_size": 2097152, 00:04:52.514 "send_buf_size": 2097152, 00:04:52.514 "enable_recv_pipe": true, 00:04:52.514 "enable_quickack": false, 00:04:52.514 "enable_placement_id": 0, 00:04:52.514 "enable_zerocopy_send_server": true, 00:04:52.514 "enable_zerocopy_send_client": false, 00:04:52.514 "zerocopy_threshold": 0, 00:04:52.514 "tls_version": 0, 00:04:52.514 "enable_ktls": false 00:04:52.514 } 00:04:52.514 } 00:04:52.514 ] 00:04:52.514 }, 00:04:52.514 { 00:04:52.514 "subsystem": "vmd", 00:04:52.514 "config": [] 00:04:52.514 }, 00:04:52.514 { 00:04:52.514 "subsystem": "accel", 00:04:52.514 "config": [ 00:04:52.514 { 00:04:52.514 "method": "accel_set_options", 00:04:52.514 "params": { 00:04:52.514 "small_cache_size": 128, 00:04:52.514 "large_cache_size": 16, 00:04:52.514 "task_count": 2048, 00:04:52.514 "sequence_count": 2048, 00:04:52.514 "buf_count": 2048 00:04:52.514 } 00:04:52.514 } 00:04:52.514 ] 00:04:52.514 }, 00:04:52.514 { 00:04:52.514 "subsystem": "bdev", 00:04:52.514 "config": [ 00:04:52.514 { 00:04:52.514 "method": "bdev_set_options", 00:04:52.514 "params": { 00:04:52.514 "bdev_io_pool_size": 65535, 00:04:52.514 "bdev_io_cache_size": 256, 00:04:52.514 "bdev_auto_examine": true, 00:04:52.514 "iobuf_small_cache_size": 128, 00:04:52.514 "iobuf_large_cache_size": 16 00:04:52.514 } 00:04:52.514 }, 00:04:52.514 { 00:04:52.514 "method": "bdev_raid_set_options", 00:04:52.514 "params": { 00:04:52.514 "process_window_size_kb": 1024, 00:04:52.514 "process_max_bandwidth_mb_sec": 0 00:04:52.514 } 00:04:52.514 }, 00:04:52.514 { 00:04:52.514 "method": "bdev_iscsi_set_options", 00:04:52.514 "params": { 00:04:52.514 "timeout_sec": 30 00:04:52.514 } 00:04:52.514 }, 00:04:52.514 { 00:04:52.514 "method": "bdev_nvme_set_options", 00:04:52.514 "params": { 00:04:52.514 "action_on_timeout": "none", 00:04:52.514 "timeout_us": 0, 00:04:52.514 "timeout_admin_us": 0, 00:04:52.514 "keep_alive_timeout_ms": 10000, 00:04:52.514 "arbitration_burst": 0, 00:04:52.514 "low_priority_weight": 0, 00:04:52.514 "medium_priority_weight": 0, 00:04:52.514 "high_priority_weight": 0, 00:04:52.514 "nvme_adminq_poll_period_us": 10000, 00:04:52.514 "nvme_ioq_poll_period_us": 0, 00:04:52.514 "io_queue_requests": 0, 00:04:52.514 "delay_cmd_submit": true, 00:04:52.514 "transport_retry_count": 4, 00:04:52.514 "bdev_retry_count": 3, 00:04:52.514 "transport_ack_timeout": 0, 00:04:52.514 "ctrlr_loss_timeout_sec": 0, 00:04:52.514 "reconnect_delay_sec": 0, 00:04:52.514 "fast_io_fail_timeout_sec": 0, 00:04:52.514 "disable_auto_failback": false, 00:04:52.514 "generate_uuids": false, 00:04:52.514 "transport_tos": 0, 00:04:52.514 "nvme_error_stat": false, 00:04:52.514 "rdma_srq_size": 0, 00:04:52.514 "io_path_stat": false, 
00:04:52.514 "allow_accel_sequence": false, 00:04:52.514 "rdma_max_cq_size": 0, 00:04:52.514 "rdma_cm_event_timeout_ms": 0, 00:04:52.514 "dhchap_digests": [ 00:04:52.514 "sha256", 00:04:52.514 "sha384", 00:04:52.514 "sha512" 00:04:52.514 ], 00:04:52.514 "dhchap_dhgroups": [ 00:04:52.514 "null", 00:04:52.514 "ffdhe2048", 00:04:52.514 "ffdhe3072", 00:04:52.514 "ffdhe4096", 00:04:52.514 "ffdhe6144", 00:04:52.514 "ffdhe8192" 00:04:52.514 ], 00:04:52.514 "rdma_umr_per_io": false 00:04:52.514 } 00:04:52.514 }, 00:04:52.514 { 00:04:52.514 "method": "bdev_nvme_set_hotplug", 00:04:52.514 "params": { 00:04:52.514 "period_us": 100000, 00:04:52.514 "enable": false 00:04:52.514 } 00:04:52.514 }, 00:04:52.514 { 00:04:52.514 "method": "bdev_wait_for_examine" 00:04:52.514 } 00:04:52.514 ] 00:04:52.514 }, 00:04:52.514 { 00:04:52.514 "subsystem": "scsi", 00:04:52.514 "config": null 00:04:52.514 }, 00:04:52.514 { 00:04:52.514 "subsystem": "scheduler", 00:04:52.514 "config": [ 00:04:52.514 { 00:04:52.514 "method": "framework_set_scheduler", 00:04:52.514 "params": { 00:04:52.514 "name": "static" 00:04:52.514 } 00:04:52.514 } 00:04:52.514 ] 00:04:52.514 }, 00:04:52.514 { 00:04:52.514 "subsystem": "vhost_scsi", 00:04:52.514 "config": [] 00:04:52.514 }, 00:04:52.514 { 00:04:52.514 "subsystem": "vhost_blk", 00:04:52.514 "config": [] 00:04:52.514 }, 00:04:52.514 { 00:04:52.514 "subsystem": "ublk", 00:04:52.514 "config": [] 00:04:52.514 }, 00:04:52.514 { 00:04:52.514 "subsystem": "nbd", 00:04:52.514 "config": [] 00:04:52.514 }, 00:04:52.514 { 00:04:52.514 "subsystem": "nvmf", 00:04:52.514 "config": [ 00:04:52.514 { 00:04:52.514 "method": "nvmf_set_config", 00:04:52.514 "params": { 00:04:52.514 "discovery_filter": "match_any", 00:04:52.514 "admin_cmd_passthru": { 00:04:52.514 "identify_ctrlr": false 00:04:52.514 }, 00:04:52.514 "dhchap_digests": [ 00:04:52.514 "sha256", 00:04:52.514 "sha384", 00:04:52.514 "sha512" 00:04:52.514 ], 00:04:52.514 "dhchap_dhgroups": [ 00:04:52.514 "null", 00:04:52.514 "ffdhe2048", 00:04:52.514 "ffdhe3072", 00:04:52.514 "ffdhe4096", 00:04:52.514 "ffdhe6144", 00:04:52.514 "ffdhe8192" 00:04:52.514 ] 00:04:52.514 } 00:04:52.514 }, 00:04:52.514 { 00:04:52.514 "method": "nvmf_set_max_subsystems", 00:04:52.514 "params": { 00:04:52.514 "max_subsystems": 1024 00:04:52.514 } 00:04:52.514 }, 00:04:52.514 { 00:04:52.514 "method": "nvmf_set_crdt", 00:04:52.514 "params": { 00:04:52.514 "crdt1": 0, 00:04:52.514 "crdt2": 0, 00:04:52.514 "crdt3": 0 00:04:52.514 } 00:04:52.514 }, 00:04:52.514 { 00:04:52.514 "method": "nvmf_create_transport", 00:04:52.514 "params": { 00:04:52.514 "trtype": "TCP", 00:04:52.514 "max_queue_depth": 128, 00:04:52.514 "max_io_qpairs_per_ctrlr": 127, 00:04:52.514 "in_capsule_data_size": 4096, 00:04:52.514 "max_io_size": 131072, 00:04:52.514 "io_unit_size": 131072, 00:04:52.514 "max_aq_depth": 128, 00:04:52.514 "num_shared_buffers": 511, 00:04:52.514 "buf_cache_size": 4294967295, 00:04:52.514 "dif_insert_or_strip": false, 00:04:52.514 "zcopy": false, 00:04:52.514 "c2h_success": true, 00:04:52.514 "sock_priority": 0, 00:04:52.514 "abort_timeout_sec": 1, 00:04:52.514 "ack_timeout": 0, 00:04:52.514 "data_wr_pool_size": 0 00:04:52.514 } 00:04:52.514 } 00:04:52.514 ] 00:04:52.514 }, 00:04:52.514 { 00:04:52.514 "subsystem": "iscsi", 00:04:52.514 "config": [ 00:04:52.514 { 00:04:52.514 "method": "iscsi_set_options", 00:04:52.514 "params": { 00:04:52.514 "node_base": "iqn.2016-06.io.spdk", 00:04:52.514 "max_sessions": 128, 00:04:52.514 "max_connections_per_session": 2, 00:04:52.514 
"max_queue_depth": 64, 00:04:52.514 "default_time2wait": 2, 00:04:52.514 "default_time2retain": 20, 00:04:52.514 "first_burst_length": 8192, 00:04:52.514 "immediate_data": true, 00:04:52.514 "allow_duplicated_isid": false, 00:04:52.514 "error_recovery_level": 0, 00:04:52.514 "nop_timeout": 60, 00:04:52.514 "nop_in_interval": 30, 00:04:52.514 "disable_chap": false, 00:04:52.514 "require_chap": false, 00:04:52.514 "mutual_chap": false, 00:04:52.514 "chap_group": 0, 00:04:52.514 "max_large_datain_per_connection": 64, 00:04:52.514 "max_r2t_per_connection": 4, 00:04:52.514 "pdu_pool_size": 36864, 00:04:52.514 "immediate_data_pool_size": 16384, 00:04:52.514 "data_out_pool_size": 2048 00:04:52.514 } 00:04:52.514 } 00:04:52.514 ] 00:04:52.514 } 00:04:52.514 ] 00:04:52.514 } 00:04:52.514 19:06:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:52.514 19:06:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 59280 00:04:52.514 19:06:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 59280 ']' 00:04:52.514 19:06:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 59280 00:04:52.514 19:06:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:52.514 19:06:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:52.514 19:06:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59280 00:04:52.772 killing process with pid 59280 00:04:52.772 19:06:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:52.772 19:06:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:52.772 19:06:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59280' 00:04:52.772 19:06:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 59280 00:04:52.772 19:06:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 59280 00:04:54.145 19:06:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=59320 00:04:54.145 19:06:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:54.145 19:06:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:59.412 19:06:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 59320 00:04:59.412 19:06:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 59320 ']' 00:04:59.412 19:06:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 59320 00:04:59.412 19:06:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:59.412 19:06:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:59.412 19:06:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59320 00:04:59.412 killing process with pid 59320 00:04:59.412 19:06:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:59.412 19:06:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:59.412 19:06:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59320' 00:04:59.412 19:06:43 skip_rpc.skip_rpc_with_json 
-- common/autotest_common.sh@973 -- # kill 59320 00:04:59.412 19:06:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 59320 00:05:00.361 19:06:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:00.361 19:06:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:00.361 ************************************ 00:05:00.361 END TEST skip_rpc_with_json 00:05:00.361 ************************************ 00:05:00.361 00:05:00.361 real 0m8.637s 00:05:00.361 user 0m8.258s 00:05:00.361 sys 0m0.607s 00:05:00.361 19:06:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:00.361 19:06:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:00.361 19:06:44 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:00.361 19:06:44 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:00.361 19:06:44 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:00.361 19:06:44 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:00.361 ************************************ 00:05:00.361 START TEST skip_rpc_with_delay 00:05:00.361 ************************************ 00:05:00.361 19:06:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:05:00.361 19:06:44 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:00.361 19:06:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:05:00.361 19:06:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:00.361 19:06:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:00.361 19:06:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:00.361 19:06:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:00.361 19:06:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:00.361 19:06:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:00.361 19:06:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:00.361 19:06:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:00.361 19:06:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:00.361 19:06:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:00.361 [2024-12-16 19:06:44.531892] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
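The *ERROR* record above is the pass condition, not a failure: test_skip_rpc_with_delay asserts that spdk_tgt rejects the contradictory flag pair --no-rpc-server plus --wait-for-rpc, since it would otherwise block forever on an RPC it can never receive. A minimal reproduction, expected to exit non-zero (binary path as used throughout this run):
# should refuse to start and report a non-zero status
build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
echo $?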
00:05:00.361 19:06:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:05:00.361 19:06:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:00.361 19:06:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:00.361 19:06:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:00.361 00:05:00.362 real 0m0.133s 00:05:00.362 user 0m0.066s 00:05:00.362 sys 0m0.064s 00:05:00.362 19:06:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:00.362 19:06:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:00.362 ************************************ 00:05:00.362 END TEST skip_rpc_with_delay 00:05:00.362 ************************************ 00:05:00.362 19:06:44 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:00.362 19:06:44 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:00.362 19:06:44 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:00.362 19:06:44 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:00.362 19:06:44 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:00.362 19:06:44 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:00.362 ************************************ 00:05:00.362 START TEST exit_on_failed_rpc_init 00:05:00.362 ************************************ 00:05:00.362 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:00.362 19:06:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:05:00.362 19:06:44 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=59442 00:05:00.362 19:06:44 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 59442 00:05:00.362 19:06:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 59442 ']' 00:05:00.362 19:06:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:00.362 19:06:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:00.362 19:06:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:00.362 19:06:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:00.362 19:06:44 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:00.362 19:06:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:00.362 [2024-12-16 19:06:44.698723] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:05:00.362 [2024-12-16 19:06:44.698845] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59442 ] 00:05:00.636 [2024-12-16 19:06:44.859010] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:00.636 [2024-12-16 19:06:44.956086] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:01.207 19:06:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:01.207 19:06:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:05:01.207 19:06:45 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:01.207 19:06:45 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:01.207 19:06:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:05:01.207 19:06:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:01.207 19:06:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:01.207 19:06:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:01.208 19:06:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:01.467 19:06:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:01.467 19:06:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:01.467 19:06:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:01.467 19:06:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:01.467 19:06:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:01.467 19:06:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:01.467 [2024-12-16 19:06:45.657453] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:05:01.467 [2024-12-16 19:06:45.657812] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59455 ] 00:05:01.725 [2024-12-16 19:06:45.819762] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:01.725 [2024-12-16 19:06:45.920189] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:01.725 [2024-12-16 19:06:45.920269] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
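This listen failure is likewise deliberate: exit_on_failed_rpc_init launches a second spdk_tgt while pid 59442 still holds the default RPC socket /var/tmp/spdk.sock, and the records that follow show the second target aborting as required. Outside of this negative test, two targets coexist by giving each its own RPC socket; a minimal sketch (the second socket path is illustrative):
# first target on the default socket, second on its own
build/bin/spdk_tgt -m 0x1 &
build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk2.sock &
# address the second instance explicitly
scripts/rpc.py -s /var/tmp/spdk2.sock spdk_get_version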
00:05:01.725 [2024-12-16 19:06:45.920282] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:01.725 [2024-12-16 19:06:45.920295] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:01.983 19:06:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:05:01.983 19:06:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:01.983 19:06:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:05:01.983 19:06:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:05:01.983 19:06:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:05:01.983 19:06:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:01.983 19:06:46 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:01.983 19:06:46 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 59442 00:05:01.983 19:06:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 59442 ']' 00:05:01.983 19:06:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 59442 00:05:01.983 19:06:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:05:01.983 19:06:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:01.983 19:06:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59442 00:05:01.983 killing process with pid 59442 00:05:01.983 19:06:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:01.983 19:06:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:01.983 19:06:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59442' 00:05:01.983 19:06:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 59442 00:05:01.983 19:06:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 59442 00:05:03.356 ************************************ 00:05:03.356 END TEST exit_on_failed_rpc_init 00:05:03.356 ************************************ 00:05:03.356 00:05:03.356 real 0m2.965s 00:05:03.356 user 0m3.261s 00:05:03.356 sys 0m0.430s 00:05:03.356 19:06:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:03.356 19:06:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:03.356 19:06:47 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:03.356 00:05:03.356 real 0m18.257s 00:05:03.356 user 0m17.569s 00:05:03.356 sys 0m1.525s 00:05:03.356 19:06:47 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:03.356 ************************************ 00:05:03.356 END TEST skip_rpc 00:05:03.356 ************************************ 00:05:03.356 19:06:47 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:03.356 19:06:47 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:03.356 19:06:47 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:03.356 19:06:47 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:03.356 19:06:47 -- common/autotest_common.sh@10 -- # set +x 00:05:03.356 
************************************ 00:05:03.356 START TEST rpc_client 00:05:03.356 ************************************ 00:05:03.356 19:06:47 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:03.615 * Looking for test storage... 00:05:03.615 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:03.615 19:06:47 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:03.615 19:06:47 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:05:03.615 19:06:47 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:03.615 19:06:47 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:03.615 19:06:47 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:03.615 19:06:47 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:03.615 19:06:47 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:03.615 19:06:47 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:03.615 19:06:47 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:03.615 19:06:47 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:03.615 19:06:47 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:03.615 19:06:47 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:03.615 19:06:47 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:03.615 19:06:47 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:03.615 19:06:47 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:03.615 19:06:47 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:05:03.615 19:06:47 rpc_client -- scripts/common.sh@345 -- # : 1 00:05:03.615 19:06:47 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:03.615 19:06:47 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:03.615 19:06:47 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:03.615 19:06:47 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:03.615 19:06:47 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:03.615 19:06:47 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:03.615 19:06:47 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:03.615 19:06:47 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:03.615 19:06:47 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:03.615 19:06:47 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:03.615 19:06:47 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:03.615 19:06:47 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:03.615 19:06:47 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:03.615 19:06:47 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:03.615 19:06:47 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:03.615 19:06:47 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:03.615 19:06:47 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:03.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.615 --rc genhtml_branch_coverage=1 00:05:03.615 --rc genhtml_function_coverage=1 00:05:03.615 --rc genhtml_legend=1 00:05:03.615 --rc geninfo_all_blocks=1 00:05:03.615 --rc geninfo_unexecuted_blocks=1 00:05:03.615 00:05:03.615 ' 00:05:03.615 19:06:47 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:03.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.615 --rc genhtml_branch_coverage=1 00:05:03.615 --rc genhtml_function_coverage=1 00:05:03.615 --rc genhtml_legend=1 00:05:03.615 --rc geninfo_all_blocks=1 00:05:03.615 --rc geninfo_unexecuted_blocks=1 00:05:03.615 00:05:03.615 ' 00:05:03.615 19:06:47 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:03.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.615 --rc genhtml_branch_coverage=1 00:05:03.615 --rc genhtml_function_coverage=1 00:05:03.615 --rc genhtml_legend=1 00:05:03.615 --rc geninfo_all_blocks=1 00:05:03.615 --rc geninfo_unexecuted_blocks=1 00:05:03.615 00:05:03.615 ' 00:05:03.615 19:06:47 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:03.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.615 --rc genhtml_branch_coverage=1 00:05:03.615 --rc genhtml_function_coverage=1 00:05:03.615 --rc genhtml_legend=1 00:05:03.615 --rc geninfo_all_blocks=1 00:05:03.615 --rc geninfo_unexecuted_blocks=1 00:05:03.615 00:05:03.615 ' 00:05:03.615 19:06:47 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:03.615 OK 00:05:03.615 19:06:47 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:03.616 00:05:03.616 real 0m0.181s 00:05:03.616 user 0m0.110s 00:05:03.616 sys 0m0.075s 00:05:03.616 19:06:47 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:03.616 19:06:47 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:03.616 ************************************ 00:05:03.616 END TEST rpc_client 00:05:03.616 ************************************ 00:05:03.616 19:06:47 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:03.616 19:06:47 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:03.616 19:06:47 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:03.616 19:06:47 -- common/autotest_common.sh@10 -- # set +x 00:05:03.616 ************************************ 00:05:03.616 START TEST json_config 00:05:03.616 ************************************ 00:05:03.616 19:06:47 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:03.616 19:06:47 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:03.616 19:06:47 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:05:03.616 19:06:47 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:03.878 19:06:47 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:03.878 19:06:47 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:03.878 19:06:47 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:03.878 19:06:47 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:03.878 19:06:47 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:03.878 19:06:47 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:03.878 19:06:47 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:03.878 19:06:47 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:03.878 19:06:47 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:03.878 19:06:47 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:03.878 19:06:47 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:03.878 19:06:47 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:03.878 19:06:47 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:03.878 19:06:47 json_config -- scripts/common.sh@345 -- # : 1 00:05:03.878 19:06:47 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:03.878 19:06:47 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:03.878 19:06:47 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:03.878 19:06:47 json_config -- scripts/common.sh@353 -- # local d=1 00:05:03.878 19:06:47 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:03.878 19:06:47 json_config -- scripts/common.sh@355 -- # echo 1 00:05:03.878 19:06:47 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:03.878 19:06:47 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:03.878 19:06:47 json_config -- scripts/common.sh@353 -- # local d=2 00:05:03.878 19:06:47 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:03.878 19:06:47 json_config -- scripts/common.sh@355 -- # echo 2 00:05:03.878 19:06:47 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:03.878 19:06:47 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:03.878 19:06:47 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:03.878 19:06:47 json_config -- scripts/common.sh@368 -- # return 0 00:05:03.878 19:06:47 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:03.878 19:06:47 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:03.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.878 --rc genhtml_branch_coverage=1 00:05:03.878 --rc genhtml_function_coverage=1 00:05:03.878 --rc genhtml_legend=1 00:05:03.878 --rc geninfo_all_blocks=1 00:05:03.878 --rc geninfo_unexecuted_blocks=1 00:05:03.878 00:05:03.878 ' 00:05:03.878 19:06:47 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:03.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.878 --rc genhtml_branch_coverage=1 00:05:03.878 --rc genhtml_function_coverage=1 00:05:03.878 --rc genhtml_legend=1 00:05:03.878 --rc geninfo_all_blocks=1 00:05:03.878 --rc geninfo_unexecuted_blocks=1 00:05:03.878 00:05:03.878 ' 00:05:03.878 19:06:47 json_config -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:03.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.878 --rc genhtml_branch_coverage=1 00:05:03.878 --rc genhtml_function_coverage=1 00:05:03.878 --rc genhtml_legend=1 00:05:03.878 --rc geninfo_all_blocks=1 00:05:03.878 --rc geninfo_unexecuted_blocks=1 00:05:03.878 00:05:03.878 ' 00:05:03.878 19:06:47 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:03.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.878 --rc genhtml_branch_coverage=1 00:05:03.878 --rc genhtml_function_coverage=1 00:05:03.878 --rc genhtml_legend=1 00:05:03.878 --rc geninfo_all_blocks=1 00:05:03.878 --rc geninfo_unexecuted_blocks=1 00:05:03.878 00:05:03.878 ' 00:05:03.878 19:06:47 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:03.878 19:06:47 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:03.878 19:06:48 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:03.878 19:06:48 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:03.878 19:06:48 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:03.878 19:06:48 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:03.878 19:06:48 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:03.878 19:06:48 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:03.878 19:06:48 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:03.878 19:06:48 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:03.878 19:06:48 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:03.878 19:06:48 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:03.878 19:06:48 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9c41e311-20d4-414b-91c1-cda181937799 00:05:03.878 19:06:48 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=9c41e311-20d4-414b-91c1-cda181937799 00:05:03.878 19:06:48 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:03.878 19:06:48 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:03.878 19:06:48 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:03.878 19:06:48 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:03.878 19:06:48 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:03.878 19:06:48 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:03.878 19:06:48 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:03.878 19:06:48 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:03.878 19:06:48 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:03.878 19:06:48 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:03.878 19:06:48 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:03.878 19:06:48 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:03.878 19:06:48 json_config -- paths/export.sh@5 -- # export PATH 00:05:03.878 19:06:48 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:03.878 19:06:48 json_config -- nvmf/common.sh@51 -- # : 0 00:05:03.878 19:06:48 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:03.878 19:06:48 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:03.878 19:06:48 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:03.878 19:06:48 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:03.878 19:06:48 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:03.878 19:06:48 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:03.878 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:03.878 19:06:48 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:03.878 19:06:48 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:03.878 19:06:48 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:03.878 19:06:48 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:03.878 19:06:48 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:03.878 19:06:48 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:03.878 19:06:48 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:03.878 19:06:48 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:03.878 19:06:48 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:05:03.878 WARNING: No tests are enabled so not running JSON configuration tests 00:05:03.878 19:06:48 json_config -- json_config/json_config.sh@28 -- # exit 0 00:05:03.878 00:05:03.878 real 0m0.141s 00:05:03.878 user 0m0.089s 00:05:03.878 sys 0m0.054s 00:05:03.878 19:06:48 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:03.878 19:06:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:03.878 ************************************ 00:05:03.878 END TEST json_config 00:05:03.878 ************************************ 00:05:03.878 19:06:48 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:03.878 19:06:48 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:03.878 19:06:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:03.878 19:06:48 -- common/autotest_common.sh@10 -- # set +x 00:05:03.878 ************************************ 00:05:03.878 START TEST json_config_extra_key 00:05:03.878 ************************************ 00:05:03.878 19:06:48 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:03.878 19:06:48 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:03.879 19:06:48 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version 00:05:03.879 19:06:48 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:03.879 19:06:48 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:03.879 19:06:48 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:03.879 19:06:48 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:03.879 19:06:48 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:03.879 19:06:48 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:03.879 19:06:48 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:03.879 19:06:48 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:03.879 19:06:48 
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:03.879 19:06:48 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:03.879 19:06:48 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:03.879 19:06:48 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:03.879 19:06:48 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:03.879 19:06:48 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:03.879 19:06:48 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:03.879 19:06:48 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:03.879 19:06:48 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:03.879 19:06:48 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:03.879 19:06:48 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:03.879 19:06:48 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:03.879 19:06:48 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:03.879 19:06:48 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:03.879 19:06:48 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:03.879 19:06:48 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:03.879 19:06:48 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:03.879 19:06:48 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:03.879 19:06:48 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:03.879 19:06:48 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:03.879 19:06:48 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:03.879 19:06:48 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:03.879 19:06:48 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:03.879 19:06:48 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:03.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.879 --rc genhtml_branch_coverage=1 00:05:03.879 --rc genhtml_function_coverage=1 00:05:03.879 --rc genhtml_legend=1 00:05:03.879 --rc geninfo_all_blocks=1 00:05:03.879 --rc geninfo_unexecuted_blocks=1 00:05:03.879 00:05:03.879 ' 00:05:03.879 19:06:48 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:03.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.879 --rc genhtml_branch_coverage=1 00:05:03.879 --rc genhtml_function_coverage=1 00:05:03.879 --rc genhtml_legend=1 00:05:03.879 --rc geninfo_all_blocks=1 00:05:03.879 --rc geninfo_unexecuted_blocks=1 00:05:03.879 00:05:03.879 ' 00:05:03.879 19:06:48 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:03.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.879 --rc genhtml_branch_coverage=1 00:05:03.879 --rc genhtml_function_coverage=1 00:05:03.879 --rc genhtml_legend=1 00:05:03.879 --rc geninfo_all_blocks=1 00:05:03.879 --rc geninfo_unexecuted_blocks=1 00:05:03.879 00:05:03.879 ' 00:05:03.879 19:06:48 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:03.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.879 --rc genhtml_branch_coverage=1 00:05:03.879 --rc 
genhtml_function_coverage=1 00:05:03.879 --rc genhtml_legend=1 00:05:03.879 --rc geninfo_all_blocks=1 00:05:03.879 --rc geninfo_unexecuted_blocks=1 00:05:03.879 00:05:03.879 ' 00:05:03.879 19:06:48 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:03.879 19:06:48 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:03.879 19:06:48 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:03.879 19:06:48 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:03.879 19:06:48 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:03.879 19:06:48 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:03.879 19:06:48 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:03.879 19:06:48 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:03.879 19:06:48 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:03.879 19:06:48 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:03.879 19:06:48 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:03.879 19:06:48 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:03.879 19:06:48 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9c41e311-20d4-414b-91c1-cda181937799 00:05:03.879 19:06:48 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=9c41e311-20d4-414b-91c1-cda181937799 00:05:03.879 19:06:48 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:03.879 19:06:48 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:03.879 19:06:48 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:03.879 19:06:48 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:03.879 19:06:48 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:03.879 19:06:48 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:03.879 19:06:48 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:03.879 19:06:48 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:03.879 19:06:48 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:03.879 19:06:48 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:03.879 19:06:48 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:03.879 19:06:48 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:03.879 19:06:48 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:03.879 19:06:48 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:03.879 19:06:48 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:03.879 19:06:48 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:03.879 19:06:48 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:03.879 19:06:48 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:03.879 19:06:48 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:03.879 19:06:48 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:03.879 19:06:48 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:03.879 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:03.879 19:06:48 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:03.879 19:06:48 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:03.879 19:06:48 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:03.879 19:06:48 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:03.879 19:06:48 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:03.879 19:06:48 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:03.879 19:06:48 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:03.879 19:06:48 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:03.879 19:06:48 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:03.879 19:06:48 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:03.879 19:06:48 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:03.879 19:06:48 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:03.879 19:06:48 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:03.879 INFO: launching applications... 00:05:03.879 19:06:48 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
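json_config_test_start_app, which runs next, is the core of this suite: it launches a real spdk_tgt with the extra_key.json config plus a private RPC socket, records the pid, and blocks in waitforlisten until the app answers on that socket (max_retries=100, as the trace shows). A minimal sketch of that start-and-wait pattern, with paths as they appear in this log, not the harness's exact waitforlisten:

    spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
        --json test/json_config/extra_key.json &
    pid=$!
    # Poll the RPC socket until the target responds, up to 100 tries.
    for i in $(seq 1 100); do
        rpc.py -s /var/tmp/spdk_tgt.sock rpc_get_methods &>/dev/null && break
        sleep 0.5
    done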
00:05:03.879 19:06:48 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:03.879 19:06:48 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:03.879 19:06:48 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:03.879 19:06:48 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:03.879 19:06:48 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:03.880 19:06:48 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:03.880 19:06:48 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:03.880 19:06:48 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:03.880 19:06:48 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=59654 00:05:03.880 19:06:48 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:03.880 Waiting for target to run... 00:05:03.880 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:03.880 19:06:48 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 59654 /var/tmp/spdk_tgt.sock 00:05:03.880 19:06:48 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 59654 ']' 00:05:03.880 19:06:48 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:03.880 19:06:48 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:03.880 19:06:48 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:03.880 19:06:48 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:03.880 19:06:48 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:03.880 19:06:48 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:04.138 [2024-12-16 19:06:48.272857] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:05:04.138 [2024-12-16 19:06:48.273100] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59654 ] 00:05:04.397 [2024-12-16 19:06:48.576198] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:04.397 [2024-12-16 19:06:48.648885] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.963 19:06:49 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:04.963 19:06:49 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:05:04.963 19:06:49 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:04.963 00:05:04.963 19:06:49 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:04.963 INFO: shutting down applications... 
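Startup done, the suite immediately tears the target back down. json_config_test_shutdown_app sends SIGINT to the recorded pid, then polls with kill -0 in half-second steps, up to 30 tries, which is exactly the loop that plays out below (three sleeps before pid 59654 exits). In outline:

    kill -SIGINT "$pid"
    for (( i = 0; i < 30; i++ )); do
        kill -0 "$pid" 2>/dev/null || break   # kill -0 fails once the process is gone
        sleep 0.5
    done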
00:05:04.963 19:06:49 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:04.963 19:06:49 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:04.963 19:06:49 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:04.963 19:06:49 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 59654 ]] 00:05:04.963 19:06:49 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 59654 00:05:04.963 19:06:49 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:04.963 19:06:49 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:04.963 19:06:49 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59654 00:05:04.963 19:06:49 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:05.529 19:06:49 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:05.529 19:06:49 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:05.529 19:06:49 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59654 00:05:05.529 19:06:49 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:05.787 19:06:50 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:05.787 19:06:50 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:05.787 19:06:50 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59654 00:05:05.787 19:06:50 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:06.356 19:06:50 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:06.356 19:06:50 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:06.356 19:06:50 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59654 00:05:06.356 SPDK target shutdown done 00:05:06.356 Success 00:05:06.356 19:06:50 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:06.356 19:06:50 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:06.356 19:06:50 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:06.356 19:06:50 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:06.356 19:06:50 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:06.356 00:05:06.356 real 0m2.569s 00:05:06.356 user 0m2.301s 00:05:06.356 sys 0m0.373s 00:05:06.356 ************************************ 00:05:06.356 END TEST json_config_extra_key 00:05:06.356 ************************************ 00:05:06.356 19:06:50 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:06.356 19:06:50 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:06.356 19:06:50 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:06.356 19:06:50 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:06.356 19:06:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:06.356 19:06:50 -- common/autotest_common.sh@10 -- # set +x 00:05:06.356 ************************************ 00:05:06.356 START TEST alias_rpc 00:05:06.356 ************************************ 00:05:06.356 19:06:50 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:06.614 * Looking for test storage... 
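Every suite in this run re-executes the same lcov gate before its body, which is why the identical decimal/cmp_versions xtrace repeats verbatim for each TEST: scripts/common.sh's `lt 1.15 2` splits both version strings on IFS=.-: and compares them numerically field by field. A condensed sketch of that comparison for plain dot-separated versions (the real helper, as the trace shows, also splits on '-' and ':'):

    lt() {  # return 0 if version $1 < version $2
        local IFS=.
        local -a a=($1) b=($2)
        local i
        for (( i = 0; i < ${#a[@]} || i < ${#b[@]}; i++ )); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1   # equal
    }

With the installed lcov reporting 1.15, `lt 1.15 2` succeeds, so the branch- and function-coverage flags get exported into LCOV_OPTS, as seen in the surrounding trace.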
00:05:06.614 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:06.614 19:06:50 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:06.614 19:06:50 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:05:06.614 19:06:50 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:06.614 19:06:50 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:06.614 19:06:50 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:06.614 19:06:50 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:06.614 19:06:50 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:06.614 19:06:50 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:06.614 19:06:50 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:06.614 19:06:50 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:06.614 19:06:50 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:06.614 19:06:50 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:06.614 19:06:50 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:06.614 19:06:50 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:06.614 19:06:50 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:06.614 19:06:50 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:06.614 19:06:50 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:06.614 19:06:50 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:06.614 19:06:50 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:06.614 19:06:50 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:06.614 19:06:50 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:06.614 19:06:50 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:06.614 19:06:50 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:06.614 19:06:50 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:06.614 19:06:50 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:06.614 19:06:50 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:06.614 19:06:50 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:06.614 19:06:50 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:06.614 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:06.614 19:06:50 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:06.614 19:06:50 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:06.614 19:06:50 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:06.614 19:06:50 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:06.614 19:06:50 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:06.614 19:06:50 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:06.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.614 --rc genhtml_branch_coverage=1 00:05:06.614 --rc genhtml_function_coverage=1 00:05:06.614 --rc genhtml_legend=1 00:05:06.614 --rc geninfo_all_blocks=1 00:05:06.614 --rc geninfo_unexecuted_blocks=1 00:05:06.614 00:05:06.614 ' 00:05:06.614 19:06:50 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:06.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.614 --rc genhtml_branch_coverage=1 00:05:06.614 --rc genhtml_function_coverage=1 00:05:06.614 --rc genhtml_legend=1 00:05:06.614 --rc geninfo_all_blocks=1 00:05:06.614 --rc geninfo_unexecuted_blocks=1 00:05:06.614 00:05:06.614 ' 00:05:06.614 19:06:50 alias_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:06.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.614 --rc genhtml_branch_coverage=1 00:05:06.614 --rc genhtml_function_coverage=1 00:05:06.614 --rc genhtml_legend=1 00:05:06.614 --rc geninfo_all_blocks=1 00:05:06.614 --rc geninfo_unexecuted_blocks=1 00:05:06.614 00:05:06.614 ' 00:05:06.614 19:06:50 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:06.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.614 --rc genhtml_branch_coverage=1 00:05:06.614 --rc genhtml_function_coverage=1 00:05:06.614 --rc genhtml_legend=1 00:05:06.614 --rc geninfo_all_blocks=1 00:05:06.614 --rc geninfo_unexecuted_blocks=1 00:05:06.614 00:05:06.614 ' 00:05:06.614 19:06:50 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:06.614 19:06:50 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=59740 00:05:06.614 19:06:50 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 59740 00:05:06.614 19:06:50 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:06.614 19:06:50 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 59740 ']' 00:05:06.614 19:06:50 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:06.614 19:06:50 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:06.614 19:06:50 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:06.614 19:06:50 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:06.614 19:06:50 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:06.614 [2024-12-16 19:06:50.880728] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:05:06.615 [2024-12-16 19:06:50.880848] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59740 ] 00:05:06.872 [2024-12-16 19:06:51.026671] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:06.872 [2024-12-16 19:06:51.106926] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.438 19:06:51 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:07.438 19:06:51 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:07.438 19:06:51 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:07.696 19:06:51 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 59740 00:05:07.696 19:06:51 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 59740 ']' 00:05:07.696 19:06:51 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 59740 00:05:07.696 19:06:51 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:05:07.696 19:06:51 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:07.696 19:06:51 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59740 00:05:07.696 killing process with pid 59740 00:05:07.696 19:06:51 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:07.696 19:06:51 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:07.696 19:06:51 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59740' 00:05:07.696 19:06:51 alias_rpc -- common/autotest_common.sh@973 -- # kill 59740 00:05:07.696 19:06:51 alias_rpc -- common/autotest_common.sh@978 -- # wait 59740 00:05:09.073 ************************************ 00:05:09.073 END TEST alias_rpc 00:05:09.073 ************************************ 00:05:09.073 00:05:09.073 real 0m2.451s 00:05:09.073 user 0m2.546s 00:05:09.073 sys 0m0.375s 00:05:09.073 19:06:53 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:09.073 19:06:53 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:09.073 19:06:53 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:09.073 19:06:53 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:09.073 19:06:53 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:09.073 19:06:53 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:09.073 19:06:53 -- common/autotest_common.sh@10 -- # set +x 00:05:09.073 ************************************ 00:05:09.073 START TEST spdkcli_tcp 00:05:09.073 ************************************ 00:05:09.073 19:06:53 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:09.073 * Looking for test storage... 
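The spdkcli_tcp suite starting here checks that SPDK's JSON-RPC server is reachable over TCP and not just the default UNIX socket: spdk_tgt comes up with two reactors (-m 0x3) on /var/tmp/spdk.sock, socat bridges TCP port 9998 to that socket, and rpc.py talks to 127.0.0.1:9998 with connection retries and a per-call timeout. The commands below boil down to:

    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    socat_pid=$!
    # -r: connection retries, -t: timeout in seconds
    rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
    kill "$socat_pid"

Getting the full method list back through the bridge is the pass condition.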
00:05:09.073 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:09.073 19:06:53 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:09.073 19:06:53 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:09.073 19:06:53 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:05:09.073 19:06:53 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:09.073 19:06:53 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:09.073 19:06:53 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:09.073 19:06:53 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:09.073 19:06:53 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:09.073 19:06:53 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:09.073 19:06:53 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:09.073 19:06:53 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:09.073 19:06:53 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:09.073 19:06:53 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:09.073 19:06:53 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:09.073 19:06:53 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:09.073 19:06:53 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:09.073 19:06:53 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:09.073 19:06:53 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:09.073 19:06:53 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:09.073 19:06:53 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:09.073 19:06:53 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:09.073 19:06:53 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:09.073 19:06:53 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:09.073 19:06:53 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:09.073 19:06:53 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:09.073 19:06:53 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:09.073 19:06:53 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:09.073 19:06:53 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:09.073 19:06:53 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:09.073 19:06:53 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:09.073 19:06:53 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:09.073 19:06:53 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:09.073 19:06:53 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:09.073 19:06:53 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:09.073 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.073 --rc genhtml_branch_coverage=1 00:05:09.073 --rc genhtml_function_coverage=1 00:05:09.073 --rc genhtml_legend=1 00:05:09.073 --rc geninfo_all_blocks=1 00:05:09.073 --rc geninfo_unexecuted_blocks=1 00:05:09.073 00:05:09.073 ' 00:05:09.073 19:06:53 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:09.073 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.073 --rc genhtml_branch_coverage=1 00:05:09.073 --rc genhtml_function_coverage=1 00:05:09.073 --rc genhtml_legend=1 00:05:09.073 --rc geninfo_all_blocks=1 00:05:09.073 --rc geninfo_unexecuted_blocks=1 00:05:09.073 
00:05:09.073 ' 00:05:09.073 19:06:53 spdkcli_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:09.073 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.073 --rc genhtml_branch_coverage=1 00:05:09.073 --rc genhtml_function_coverage=1 00:05:09.073 --rc genhtml_legend=1 00:05:09.073 --rc geninfo_all_blocks=1 00:05:09.073 --rc geninfo_unexecuted_blocks=1 00:05:09.073 00:05:09.073 ' 00:05:09.073 19:06:53 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:09.073 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.073 --rc genhtml_branch_coverage=1 00:05:09.073 --rc genhtml_function_coverage=1 00:05:09.073 --rc genhtml_legend=1 00:05:09.073 --rc geninfo_all_blocks=1 00:05:09.073 --rc geninfo_unexecuted_blocks=1 00:05:09.073 00:05:09.073 ' 00:05:09.073 19:06:53 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:09.073 19:06:53 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:09.073 19:06:53 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:09.073 19:06:53 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:09.073 19:06:53 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:09.073 19:06:53 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:09.073 19:06:53 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:09.073 19:06:53 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:09.073 19:06:53 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:09.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:09.073 19:06:53 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=59831 00:05:09.073 19:06:53 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 59831 00:05:09.073 19:06:53 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 59831 ']' 00:05:09.073 19:06:53 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:09.073 19:06:53 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:09.073 19:06:53 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:09.073 19:06:53 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:09.073 19:06:53 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:09.073 19:06:53 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:09.073 [2024-12-16 19:06:53.381989] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:05:09.073 [2024-12-16 19:06:53.382125] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59831 ] 00:05:09.332 [2024-12-16 19:06:53.540133] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:09.332 [2024-12-16 19:06:53.624307] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:09.332 [2024-12-16 19:06:53.624445] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.898 19:06:54 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:09.898 19:06:54 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:05:09.898 19:06:54 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=59842 00:05:09.899 19:06:54 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:09.899 19:06:54 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:10.158 [ 00:05:10.158 "bdev_malloc_delete", 00:05:10.158 "bdev_malloc_create", 00:05:10.158 "bdev_null_resize", 00:05:10.158 "bdev_null_delete", 00:05:10.158 "bdev_null_create", 00:05:10.158 "bdev_nvme_cuse_unregister", 00:05:10.158 "bdev_nvme_cuse_register", 00:05:10.158 "bdev_opal_new_user", 00:05:10.158 "bdev_opal_set_lock_state", 00:05:10.158 "bdev_opal_delete", 00:05:10.158 "bdev_opal_get_info", 00:05:10.158 "bdev_opal_create", 00:05:10.158 "bdev_nvme_opal_revert", 00:05:10.158 "bdev_nvme_opal_init", 00:05:10.158 "bdev_nvme_send_cmd", 00:05:10.158 "bdev_nvme_set_keys", 00:05:10.158 "bdev_nvme_get_path_iostat", 00:05:10.158 "bdev_nvme_get_mdns_discovery_info", 00:05:10.158 "bdev_nvme_stop_mdns_discovery", 00:05:10.158 "bdev_nvme_start_mdns_discovery", 00:05:10.158 "bdev_nvme_set_multipath_policy", 00:05:10.158 "bdev_nvme_set_preferred_path", 00:05:10.158 "bdev_nvme_get_io_paths", 00:05:10.158 "bdev_nvme_remove_error_injection", 00:05:10.158 "bdev_nvme_add_error_injection", 00:05:10.158 "bdev_nvme_get_discovery_info", 00:05:10.158 "bdev_nvme_stop_discovery", 00:05:10.158 "bdev_nvme_start_discovery", 00:05:10.158 "bdev_nvme_get_controller_health_info", 00:05:10.158 "bdev_nvme_disable_controller", 00:05:10.158 "bdev_nvme_enable_controller", 00:05:10.158 "bdev_nvme_reset_controller", 00:05:10.158 "bdev_nvme_get_transport_statistics", 00:05:10.158 "bdev_nvme_apply_firmware", 00:05:10.158 "bdev_nvme_detach_controller", 00:05:10.158 "bdev_nvme_get_controllers", 00:05:10.158 "bdev_nvme_attach_controller", 00:05:10.158 "bdev_nvme_set_hotplug", 00:05:10.158 "bdev_nvme_set_options", 00:05:10.158 "bdev_passthru_delete", 00:05:10.158 "bdev_passthru_create", 00:05:10.158 "bdev_lvol_set_parent_bdev", 00:05:10.158 "bdev_lvol_set_parent", 00:05:10.158 "bdev_lvol_check_shallow_copy", 00:05:10.158 "bdev_lvol_start_shallow_copy", 00:05:10.158 "bdev_lvol_grow_lvstore", 00:05:10.158 "bdev_lvol_get_lvols", 00:05:10.158 "bdev_lvol_get_lvstores", 00:05:10.158 "bdev_lvol_delete", 00:05:10.158 "bdev_lvol_set_read_only", 00:05:10.158 "bdev_lvol_resize", 00:05:10.158 "bdev_lvol_decouple_parent", 00:05:10.158 "bdev_lvol_inflate", 00:05:10.158 "bdev_lvol_rename", 00:05:10.158 "bdev_lvol_clone_bdev", 00:05:10.158 "bdev_lvol_clone", 00:05:10.158 "bdev_lvol_snapshot", 00:05:10.158 "bdev_lvol_create", 00:05:10.158 "bdev_lvol_delete_lvstore", 00:05:10.158 "bdev_lvol_rename_lvstore", 00:05:10.158 
"bdev_lvol_create_lvstore", 00:05:10.158 "bdev_raid_set_options", 00:05:10.158 "bdev_raid_remove_base_bdev", 00:05:10.158 "bdev_raid_add_base_bdev", 00:05:10.158 "bdev_raid_delete", 00:05:10.158 "bdev_raid_create", 00:05:10.158 "bdev_raid_get_bdevs", 00:05:10.158 "bdev_error_inject_error", 00:05:10.158 "bdev_error_delete", 00:05:10.158 "bdev_error_create", 00:05:10.158 "bdev_split_delete", 00:05:10.158 "bdev_split_create", 00:05:10.158 "bdev_delay_delete", 00:05:10.158 "bdev_delay_create", 00:05:10.158 "bdev_delay_update_latency", 00:05:10.158 "bdev_zone_block_delete", 00:05:10.158 "bdev_zone_block_create", 00:05:10.158 "blobfs_create", 00:05:10.158 "blobfs_detect", 00:05:10.158 "blobfs_set_cache_size", 00:05:10.158 "bdev_xnvme_delete", 00:05:10.158 "bdev_xnvme_create", 00:05:10.158 "bdev_aio_delete", 00:05:10.158 "bdev_aio_rescan", 00:05:10.158 "bdev_aio_create", 00:05:10.158 "bdev_ftl_set_property", 00:05:10.158 "bdev_ftl_get_properties", 00:05:10.158 "bdev_ftl_get_stats", 00:05:10.158 "bdev_ftl_unmap", 00:05:10.158 "bdev_ftl_unload", 00:05:10.158 "bdev_ftl_delete", 00:05:10.158 "bdev_ftl_load", 00:05:10.158 "bdev_ftl_create", 00:05:10.158 "bdev_virtio_attach_controller", 00:05:10.158 "bdev_virtio_scsi_get_devices", 00:05:10.158 "bdev_virtio_detach_controller", 00:05:10.158 "bdev_virtio_blk_set_hotplug", 00:05:10.158 "bdev_iscsi_delete", 00:05:10.158 "bdev_iscsi_create", 00:05:10.158 "bdev_iscsi_set_options", 00:05:10.158 "accel_error_inject_error", 00:05:10.158 "ioat_scan_accel_module", 00:05:10.158 "dsa_scan_accel_module", 00:05:10.158 "iaa_scan_accel_module", 00:05:10.158 "keyring_file_remove_key", 00:05:10.158 "keyring_file_add_key", 00:05:10.158 "keyring_linux_set_options", 00:05:10.158 "fsdev_aio_delete", 00:05:10.158 "fsdev_aio_create", 00:05:10.158 "iscsi_get_histogram", 00:05:10.158 "iscsi_enable_histogram", 00:05:10.158 "iscsi_set_options", 00:05:10.158 "iscsi_get_auth_groups", 00:05:10.158 "iscsi_auth_group_remove_secret", 00:05:10.158 "iscsi_auth_group_add_secret", 00:05:10.158 "iscsi_delete_auth_group", 00:05:10.158 "iscsi_create_auth_group", 00:05:10.158 "iscsi_set_discovery_auth", 00:05:10.159 "iscsi_get_options", 00:05:10.159 "iscsi_target_node_request_logout", 00:05:10.159 "iscsi_target_node_set_redirect", 00:05:10.159 "iscsi_target_node_set_auth", 00:05:10.159 "iscsi_target_node_add_lun", 00:05:10.159 "iscsi_get_stats", 00:05:10.159 "iscsi_get_connections", 00:05:10.159 "iscsi_portal_group_set_auth", 00:05:10.159 "iscsi_start_portal_group", 00:05:10.159 "iscsi_delete_portal_group", 00:05:10.159 "iscsi_create_portal_group", 00:05:10.159 "iscsi_get_portal_groups", 00:05:10.159 "iscsi_delete_target_node", 00:05:10.159 "iscsi_target_node_remove_pg_ig_maps", 00:05:10.159 "iscsi_target_node_add_pg_ig_maps", 00:05:10.159 "iscsi_create_target_node", 00:05:10.159 "iscsi_get_target_nodes", 00:05:10.159 "iscsi_delete_initiator_group", 00:05:10.159 "iscsi_initiator_group_remove_initiators", 00:05:10.159 "iscsi_initiator_group_add_initiators", 00:05:10.159 "iscsi_create_initiator_group", 00:05:10.159 "iscsi_get_initiator_groups", 00:05:10.159 "nvmf_set_crdt", 00:05:10.159 "nvmf_set_config", 00:05:10.159 "nvmf_set_max_subsystems", 00:05:10.159 "nvmf_stop_mdns_prr", 00:05:10.159 "nvmf_publish_mdns_prr", 00:05:10.159 "nvmf_subsystem_get_listeners", 00:05:10.159 "nvmf_subsystem_get_qpairs", 00:05:10.159 "nvmf_subsystem_get_controllers", 00:05:10.159 "nvmf_get_stats", 00:05:10.159 "nvmf_get_transports", 00:05:10.159 "nvmf_create_transport", 00:05:10.159 "nvmf_get_targets", 00:05:10.159 
"nvmf_delete_target", 00:05:10.159 "nvmf_create_target", 00:05:10.159 "nvmf_subsystem_allow_any_host", 00:05:10.159 "nvmf_subsystem_set_keys", 00:05:10.159 "nvmf_subsystem_remove_host", 00:05:10.159 "nvmf_subsystem_add_host", 00:05:10.159 "nvmf_ns_remove_host", 00:05:10.159 "nvmf_ns_add_host", 00:05:10.159 "nvmf_subsystem_remove_ns", 00:05:10.159 "nvmf_subsystem_set_ns_ana_group", 00:05:10.159 "nvmf_subsystem_add_ns", 00:05:10.159 "nvmf_subsystem_listener_set_ana_state", 00:05:10.159 "nvmf_discovery_get_referrals", 00:05:10.159 "nvmf_discovery_remove_referral", 00:05:10.159 "nvmf_discovery_add_referral", 00:05:10.159 "nvmf_subsystem_remove_listener", 00:05:10.159 "nvmf_subsystem_add_listener", 00:05:10.159 "nvmf_delete_subsystem", 00:05:10.159 "nvmf_create_subsystem", 00:05:10.159 "nvmf_get_subsystems", 00:05:10.159 "env_dpdk_get_mem_stats", 00:05:10.159 "nbd_get_disks", 00:05:10.159 "nbd_stop_disk", 00:05:10.159 "nbd_start_disk", 00:05:10.159 "ublk_recover_disk", 00:05:10.159 "ublk_get_disks", 00:05:10.159 "ublk_stop_disk", 00:05:10.159 "ublk_start_disk", 00:05:10.159 "ublk_destroy_target", 00:05:10.159 "ublk_create_target", 00:05:10.159 "virtio_blk_create_transport", 00:05:10.159 "virtio_blk_get_transports", 00:05:10.159 "vhost_controller_set_coalescing", 00:05:10.159 "vhost_get_controllers", 00:05:10.159 "vhost_delete_controller", 00:05:10.159 "vhost_create_blk_controller", 00:05:10.159 "vhost_scsi_controller_remove_target", 00:05:10.159 "vhost_scsi_controller_add_target", 00:05:10.159 "vhost_start_scsi_controller", 00:05:10.159 "vhost_create_scsi_controller", 00:05:10.159 "thread_set_cpumask", 00:05:10.159 "scheduler_set_options", 00:05:10.159 "framework_get_governor", 00:05:10.159 "framework_get_scheduler", 00:05:10.159 "framework_set_scheduler", 00:05:10.159 "framework_get_reactors", 00:05:10.159 "thread_get_io_channels", 00:05:10.159 "thread_get_pollers", 00:05:10.159 "thread_get_stats", 00:05:10.159 "framework_monitor_context_switch", 00:05:10.159 "spdk_kill_instance", 00:05:10.159 "log_enable_timestamps", 00:05:10.159 "log_get_flags", 00:05:10.159 "log_clear_flag", 00:05:10.159 "log_set_flag", 00:05:10.159 "log_get_level", 00:05:10.159 "log_set_level", 00:05:10.159 "log_get_print_level", 00:05:10.159 "log_set_print_level", 00:05:10.159 "framework_enable_cpumask_locks", 00:05:10.159 "framework_disable_cpumask_locks", 00:05:10.159 "framework_wait_init", 00:05:10.159 "framework_start_init", 00:05:10.159 "scsi_get_devices", 00:05:10.159 "bdev_get_histogram", 00:05:10.159 "bdev_enable_histogram", 00:05:10.159 "bdev_set_qos_limit", 00:05:10.159 "bdev_set_qd_sampling_period", 00:05:10.159 "bdev_get_bdevs", 00:05:10.159 "bdev_reset_iostat", 00:05:10.159 "bdev_get_iostat", 00:05:10.159 "bdev_examine", 00:05:10.159 "bdev_wait_for_examine", 00:05:10.159 "bdev_set_options", 00:05:10.159 "accel_get_stats", 00:05:10.159 "accel_set_options", 00:05:10.159 "accel_set_driver", 00:05:10.159 "accel_crypto_key_destroy", 00:05:10.159 "accel_crypto_keys_get", 00:05:10.159 "accel_crypto_key_create", 00:05:10.159 "accel_assign_opc", 00:05:10.159 "accel_get_module_info", 00:05:10.159 "accel_get_opc_assignments", 00:05:10.159 "vmd_rescan", 00:05:10.159 "vmd_remove_device", 00:05:10.159 "vmd_enable", 00:05:10.159 "sock_get_default_impl", 00:05:10.159 "sock_set_default_impl", 00:05:10.159 "sock_impl_set_options", 00:05:10.159 "sock_impl_get_options", 00:05:10.159 "iobuf_get_stats", 00:05:10.159 "iobuf_set_options", 00:05:10.159 "keyring_get_keys", 00:05:10.159 "framework_get_pci_devices", 00:05:10.159 
"framework_get_config", 00:05:10.159 "framework_get_subsystems", 00:05:10.159 "fsdev_set_opts", 00:05:10.159 "fsdev_get_opts", 00:05:10.159 "trace_get_info", 00:05:10.159 "trace_get_tpoint_group_mask", 00:05:10.159 "trace_disable_tpoint_group", 00:05:10.159 "trace_enable_tpoint_group", 00:05:10.159 "trace_clear_tpoint_mask", 00:05:10.159 "trace_set_tpoint_mask", 00:05:10.159 "notify_get_notifications", 00:05:10.159 "notify_get_types", 00:05:10.159 "spdk_get_version", 00:05:10.159 "rpc_get_methods" 00:05:10.159 ] 00:05:10.159 19:06:54 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:10.159 19:06:54 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:10.159 19:06:54 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:10.159 19:06:54 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:10.159 19:06:54 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 59831 00:05:10.159 19:06:54 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 59831 ']' 00:05:10.159 19:06:54 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 59831 00:05:10.159 19:06:54 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:05:10.159 19:06:54 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:10.159 19:06:54 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59831 00:05:10.159 19:06:54 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:10.159 19:06:54 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:10.159 19:06:54 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59831' 00:05:10.159 killing process with pid 59831 00:05:10.159 19:06:54 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 59831 00:05:10.159 19:06:54 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 59831 00:05:12.063 00:05:12.063 real 0m2.894s 00:05:12.063 user 0m5.190s 00:05:12.063 sys 0m0.443s 00:05:12.063 19:06:56 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:12.063 19:06:56 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:12.063 ************************************ 00:05:12.063 END TEST spdkcli_tcp 00:05:12.063 ************************************ 00:05:12.063 19:06:56 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:12.063 19:06:56 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:12.063 19:06:56 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:12.063 19:06:56 -- common/autotest_common.sh@10 -- # set +x 00:05:12.063 ************************************ 00:05:12.063 START TEST dpdk_mem_utility 00:05:12.063 ************************************ 00:05:12.063 19:06:56 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:12.063 * Looking for test storage... 
00:05:12.063 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:12.063 19:06:56 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:12.063 19:06:56 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:12.063 19:06:56 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:05:12.063 19:06:56 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:12.063 19:06:56 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:12.063 19:06:56 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:12.063 19:06:56 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:12.063 19:06:56 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:12.063 19:06:56 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:12.063 19:06:56 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:12.063 19:06:56 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:12.063 19:06:56 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:12.063 19:06:56 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:12.063 19:06:56 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:12.063 19:06:56 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:12.063 19:06:56 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:12.063 19:06:56 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:12.063 19:06:56 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:12.063 19:06:56 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:12.063 19:06:56 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:12.063 19:06:56 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:12.063 19:06:56 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:12.063 19:06:56 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:12.063 19:06:56 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:12.063 19:06:56 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:12.063 19:06:56 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:12.063 19:06:56 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:12.063 19:06:56 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:12.063 19:06:56 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:12.063 19:06:56 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:12.063 19:06:56 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:12.063 19:06:56 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:12.063 19:06:56 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:12.063 19:06:56 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:12.063 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.063 --rc genhtml_branch_coverage=1 00:05:12.063 --rc genhtml_function_coverage=1 00:05:12.063 --rc genhtml_legend=1 00:05:12.063 --rc geninfo_all_blocks=1 00:05:12.063 --rc geninfo_unexecuted_blocks=1 00:05:12.063 00:05:12.063 ' 00:05:12.063 19:06:56 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:12.063 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.063 --rc 
genhtml_branch_coverage=1 00:05:12.063 --rc genhtml_function_coverage=1 00:05:12.063 --rc genhtml_legend=1 00:05:12.063 --rc geninfo_all_blocks=1 00:05:12.063 --rc geninfo_unexecuted_blocks=1 00:05:12.063 00:05:12.063 ' 00:05:12.063 19:06:56 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:12.063 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.063 --rc genhtml_branch_coverage=1 00:05:12.063 --rc genhtml_function_coverage=1 00:05:12.063 --rc genhtml_legend=1 00:05:12.063 --rc geninfo_all_blocks=1 00:05:12.063 --rc geninfo_unexecuted_blocks=1 00:05:12.063 00:05:12.063 ' 00:05:12.063 19:06:56 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:12.063 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.064 --rc genhtml_branch_coverage=1 00:05:12.064 --rc genhtml_function_coverage=1 00:05:12.064 --rc genhtml_legend=1 00:05:12.064 --rc geninfo_all_blocks=1 00:05:12.064 --rc geninfo_unexecuted_blocks=1 00:05:12.064 00:05:12.064 ' 00:05:12.064 19:06:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:12.064 19:06:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=59936 00:05:12.064 19:06:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:12.064 19:06:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 59936 00:05:12.064 19:06:56 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 59936 ']' 00:05:12.064 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:12.064 19:06:56 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:12.064 19:06:56 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:12.064 19:06:56 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:12.064 19:06:56 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:12.064 19:06:56 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:12.064 [2024-12-16 19:06:56.319755] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:05:12.064 [2024-12-16 19:06:56.319876] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59936 ] 00:05:12.322 [2024-12-16 19:06:56.478018] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:12.322 [2024-12-16 19:06:56.585975] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.888 19:06:57 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:12.888 19:06:57 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:05:12.888 19:06:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:12.889 19:06:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:12.889 19:06:57 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:12.889 19:06:57 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:12.889 { 00:05:12.889 "filename": "/tmp/spdk_mem_dump.txt" 00:05:12.889 } 00:05:12.889 19:06:57 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:12.889 19:06:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:13.151 DPDK memory size 824.000000 MiB in 1 heap(s) 00:05:13.151 1 heaps totaling size 824.000000 MiB 00:05:13.151 size: 824.000000 MiB heap id: 0 00:05:13.151 end heaps---------- 00:05:13.151 9 mempools totaling size 603.782043 MiB 00:05:13.151 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:13.151 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:13.151 size: 100.555481 MiB name: bdev_io_59936 00:05:13.151 size: 50.003479 MiB name: msgpool_59936 00:05:13.151 size: 36.509338 MiB name: fsdev_io_59936 00:05:13.151 size: 21.763794 MiB name: PDU_Pool 00:05:13.151 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:13.151 size: 4.133484 MiB name: evtpool_59936 00:05:13.151 size: 0.026123 MiB name: Session_Pool 00:05:13.151 end mempools------- 00:05:13.151 6 memzones totaling size 4.142822 MiB 00:05:13.151 size: 1.000366 MiB name: RG_ring_0_59936 00:05:13.151 size: 1.000366 MiB name: RG_ring_1_59936 00:05:13.151 size: 1.000366 MiB name: RG_ring_4_59936 00:05:13.151 size: 1.000366 MiB name: RG_ring_5_59936 00:05:13.151 size: 0.125366 MiB name: RG_ring_2_59936 00:05:13.151 size: 0.015991 MiB name: RG_ring_3_59936 00:05:13.151 end memzones------- 00:05:13.151 19:06:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:13.151 heap id: 0 total size: 824.000000 MiB number of busy elements: 320 number of free elements: 18 00:05:13.151 list of free elements. 
size: 16.780151 MiB 00:05:13.151 element at address: 0x200006400000 with size: 1.995972 MiB 00:05:13.151 element at address: 0x20000a600000 with size: 1.995972 MiB 00:05:13.151 element at address: 0x200003e00000 with size: 1.991028 MiB 00:05:13.151 element at address: 0x200019500040 with size: 0.999939 MiB 00:05:13.151 element at address: 0x200019900040 with size: 0.999939 MiB 00:05:13.151 element at address: 0x200019a00000 with size: 0.999084 MiB 00:05:13.151 element at address: 0x200032600000 with size: 0.994324 MiB 00:05:13.151 element at address: 0x200000400000 with size: 0.992004 MiB 00:05:13.151 element at address: 0x200019200000 with size: 0.959656 MiB 00:05:13.151 element at address: 0x200019d00040 with size: 0.936401 MiB 00:05:13.151 element at address: 0x200000200000 with size: 0.716980 MiB 00:05:13.151 element at address: 0x20001b400000 with size: 0.560242 MiB 00:05:13.151 element at address: 0x200000c00000 with size: 0.490173 MiB 00:05:13.151 element at address: 0x200019600000 with size: 0.487976 MiB 00:05:13.151 element at address: 0x200019e00000 with size: 0.485413 MiB 00:05:13.151 element at address: 0x200012c00000 with size: 0.433472 MiB 00:05:13.151 element at address: 0x200028800000 with size: 0.390686 MiB 00:05:13.151 element at address: 0x200000800000 with size: 0.350891 MiB 00:05:13.151 list of standard malloc elements. size: 199.288940 MiB 00:05:13.151 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:05:13.151 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:05:13.151 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:05:13.151 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:05:13.151 element at address: 0x200019bfff80 with size: 1.000183 MiB 00:05:13.151 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:05:13.151 element at address: 0x200019deff40 with size: 0.062683 MiB 00:05:13.151 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:05:13.151 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:05:13.151 element at address: 0x200019defdc0 with size: 0.000366 MiB 00:05:13.151 element at address: 0x200012bff040 with size: 0.000305 MiB 00:05:13.151 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:05:13.151 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:05:13.151 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:05:13.151 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:05:13.151 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:05:13.151 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:05:13.151 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:05:13.151 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:05:13.151 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:05:13.151 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:05:13.151 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:05:13.151 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:05:13.151 element at address: 0x2000004fe940 with size: 0.000244 MiB 00:05:13.151 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:05:13.151 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:05:13.151 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:05:13.151 element at address: 0x2000004fed40 with size: 0.000244 MiB 00:05:13.151 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:05:13.151 element at address: 0x2000004fef40 with size: 0.000244 MiB 
00:05:13.151 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:05:13.151 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:05:13.151 element at address: 0x2000004ff240 with size: 0.000244 MiB 00:05:13.151 element at address: 0x2000004ff340 with size: 0.000244 MiB 00:05:13.151 element at address: 0x2000004ff440 with size: 0.000244 MiB 00:05:13.151 element at address: 0x2000004ff540 with size: 0.000244 MiB 00:05:13.151 element at address: 0x2000004ff640 with size: 0.000244 MiB 00:05:13.151 element at address: 0x2000004ff740 with size: 0.000244 MiB 00:05:13.151 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:05:13.151 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:05:13.151 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 00:05:13.151 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:05:13.151 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:05:13.151 element at address: 0x20000087e1c0 with size: 0.000244 MiB 00:05:13.151 element at address: 0x20000087e2c0 with size: 0.000244 MiB 00:05:13.151 element at address: 0x20000087e3c0 with size: 0.000244 MiB 00:05:13.151 element at address: 0x20000087e4c0 with size: 0.000244 MiB 00:05:13.151 element at address: 0x20000087e5c0 with size: 0.000244 MiB 00:05:13.151 element at address: 0x20000087e6c0 with size: 0.000244 MiB 00:05:13.151 element at address: 0x20000087e7c0 with size: 0.000244 MiB 00:05:13.151 element at address: 0x20000087e8c0 with size: 0.000244 MiB 00:05:13.151 element at address: 0x20000087e9c0 with size: 0.000244 MiB 00:05:13.151 element at address: 0x20000087eac0 with size: 0.000244 MiB 00:05:13.151 element at address: 0x20000087ebc0 with size: 0.000244 MiB 00:05:13.151 element at address: 0x20000087ecc0 with size: 0.000244 MiB 00:05:13.151 element at address: 0x20000087edc0 with size: 0.000244 MiB 00:05:13.151 element at address: 0x20000087eec0 with size: 0.000244 MiB 00:05:13.151 element at address: 0x20000087efc0 with size: 0.000244 MiB 00:05:13.151 element at address: 0x20000087f0c0 with size: 0.000244 MiB 00:05:13.151 element at address: 0x20000087f1c0 with size: 0.000244 MiB 00:05:13.151 element at address: 0x20000087f2c0 with size: 0.000244 MiB 00:05:13.151 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:05:13.151 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:05:13.151 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:05:13.151 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:05:13.151 element at address: 0x200000c7d7c0 with size: 0.000244 MiB 00:05:13.151 element at address: 0x200000c7d8c0 with size: 0.000244 MiB 00:05:13.151 element at address: 0x200000c7d9c0 with size: 0.000244 MiB 00:05:13.151 element at address: 0x200000c7dac0 with size: 0.000244 MiB 00:05:13.151 element at address: 0x200000c7dbc0 with size: 0.000244 MiB 00:05:13.151 element at address: 0x200000c7dcc0 with size: 0.000244 MiB 00:05:13.151 element at address: 0x200000c7ddc0 with size: 0.000244 MiB 00:05:13.151 element at address: 0x200000c7dec0 with size: 0.000244 MiB 00:05:13.151 element at address: 0x200000c7dfc0 with size: 0.000244 MiB 00:05:13.151 element at address: 0x200000c7e0c0 with size: 0.000244 MiB 00:05:13.151 element at address: 0x200000c7e1c0 with size: 0.000244 MiB 00:05:13.151 element at address: 0x200000c7e2c0 with size: 0.000244 MiB 00:05:13.151 element at address: 0x200000c7e3c0 with size: 0.000244 MiB 00:05:13.151 element at address: 0x200000c7e4c0 with size: 0.000244 MiB 00:05:13.151 element at 
address: 0x200000c7e5c0 with size: 0.000244 MiB 00:05:13.151 element at address: 0x200000c7e6c0 with size: 0.000244 MiB 00:05:13.151 element at address: 0x200000c7e7c0 with size: 0.000244 MiB 00:05:13.151 element at address: 0x200000c7e8c0 with size: 0.000244 MiB 00:05:13.151 element at address: 0x200000c7e9c0 with size: 0.000244 MiB 00:05:13.151 element at address: 0x200000c7eac0 with size: 0.000244 MiB 00:05:13.151 element at address: 0x200000c7ebc0 with size: 0.000244 MiB 00:05:13.151 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:05:13.152 element at address: 0x200000cff000 with size: 0.000244 MiB 00:05:13.152 element at address: 0x20000a5ff200 with size: 0.000244 MiB 00:05:13.152 element at address: 0x20000a5ff300 with size: 0.000244 MiB 00:05:13.152 element at address: 0x20000a5ff400 with size: 0.000244 MiB 00:05:13.152 element at address: 0x20000a5ff500 with size: 0.000244 MiB 00:05:13.152 element at address: 0x20000a5ff600 with size: 0.000244 MiB 00:05:13.152 element at address: 0x20000a5ff700 with size: 0.000244 MiB 00:05:13.152 element at address: 0x20000a5ff800 with size: 0.000244 MiB 00:05:13.152 element at address: 0x20000a5ff900 with size: 0.000244 MiB 00:05:13.152 element at address: 0x20000a5ffa00 with size: 0.000244 MiB 00:05:13.152 element at address: 0x20000a5ffb00 with size: 0.000244 MiB 00:05:13.152 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:05:13.152 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:05:13.152 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:05:13.152 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:05:13.152 element at address: 0x200012bff180 with size: 0.000244 MiB 00:05:13.152 element at address: 0x200012bff280 with size: 0.000244 MiB 00:05:13.152 element at address: 0x200012bff380 with size: 0.000244 MiB 00:05:13.152 element at address: 0x200012bff480 with size: 0.000244 MiB 00:05:13.152 element at address: 0x200012bff580 with size: 0.000244 MiB 00:05:13.152 element at address: 0x200012bff680 with size: 0.000244 MiB 00:05:13.152 element at address: 0x200012bff780 with size: 0.000244 MiB 00:05:13.152 element at address: 0x200012bff880 with size: 0.000244 MiB 00:05:13.152 element at address: 0x200012bff980 with size: 0.000244 MiB 00:05:13.152 element at address: 0x200012bffa80 with size: 0.000244 MiB 00:05:13.152 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:05:13.152 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:05:13.152 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:05:13.152 element at address: 0x200012c6ef80 with size: 0.000244 MiB 00:05:13.152 element at address: 0x200012c6f080 with size: 0.000244 MiB 00:05:13.152 element at address: 0x200012c6f180 with size: 0.000244 MiB 00:05:13.152 element at address: 0x200012c6f280 with size: 0.000244 MiB 00:05:13.152 element at address: 0x200012c6f380 with size: 0.000244 MiB 00:05:13.152 element at address: 0x200012c6f480 with size: 0.000244 MiB 00:05:13.152 element at address: 0x200012c6f580 with size: 0.000244 MiB 00:05:13.152 element at address: 0x200012c6f680 with size: 0.000244 MiB 00:05:13.152 element at address: 0x200012c6f780 with size: 0.000244 MiB 00:05:13.152 element at address: 0x200012c6f880 with size: 0.000244 MiB 00:05:13.152 element at address: 0x200012cefbc0 with size: 0.000244 MiB 00:05:13.152 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:05:13.152 element at address: 0x20001967cec0 with size: 0.000244 MiB 00:05:13.152 element at address: 0x20001967cfc0 
with size: 0.000244 MiB 00:05:13.152 element at address: 0x20001967d0c0 with size: 0.000244 MiB 00:05:13.152 element at address: 0x20001967d1c0 with size: 0.000244 MiB 00:05:13.152 element at address: 0x20001967d2c0 with size: 0.000244 MiB 00:05:13.152 element at address: 0x20001967d3c0 with size: 0.000244 MiB 00:05:13.152 element at address: 0x20001967d4c0 with size: 0.000244 MiB 00:05:13.152 element at address: 0x20001967d5c0 with size: 0.000244 MiB 00:05:13.152 element at address: 0x20001967d6c0 with size: 0.000244 MiB 00:05:13.152 element at address: 0x20001967d7c0 with size: 0.000244 MiB 00:05:13.152 element at address: 0x20001967d8c0 with size: 0.000244 MiB 00:05:13.152 element at address: 0x20001967d9c0 with size: 0.000244 MiB 00:05:13.152 element at address: 0x2000196fdd00 with size: 0.000244 MiB 00:05:13.152 element at address: 0x200019affc40 with size: 0.000244 MiB 00:05:13.152 element at address: 0x200019defbc0 with size: 0.000244 MiB 00:05:13.152 element at address: 0x200019defcc0 with size: 0.000244 MiB 00:05:13.152 element at address: 0x200019ebc680 with size: 0.000244 MiB 00:05:13.152 element at address: 0x20001b48f6c0 with size: 0.000244 MiB 00:05:13.152 element at address: 0x20001b48f7c0 with size: 0.000244 MiB 00:05:13.152 element at address: 0x20001b48f8c0 with size: 0.000244 MiB 00:05:13.152 element at address: 0x20001b48f9c0 with size: 0.000244 MiB 00:05:13.152 element at address: 0x20001b48fac0 with size: 0.000244 MiB 00:05:13.152 element at address: 0x20001b48fbc0 with size: 0.000244 MiB 00:05:13.152 element at address: 0x20001b48fcc0 with size: 0.000244 MiB 00:05:13.152 element at address: 0x20001b48fdc0 with size: 0.000244 MiB 00:05:13.152 element at address: 0x20001b48fec0 with size: 0.000244 MiB 00:05:13.152 element at address: 0x20001b48ffc0 with size: 0.000244 MiB 00:05:13.152 element at address: 0x20001b4900c0 with size: 0.000244 MiB 00:05:13.152 element at address: 0x20001b4901c0 with size: 0.000244 MiB 00:05:13.152 element at address: 0x20001b4902c0 with size: 0.000244 MiB 00:05:13.152 element at address: 0x20001b4903c0 with size: 0.000244 MiB 00:05:13.152 element at address: 0x20001b4904c0 with size: 0.000244 MiB 00:05:13.152 element at address: 0x20001b4905c0 with size: 0.000244 MiB 00:05:13.152 element at address: 0x20001b4906c0 with size: 0.000244 MiB 00:05:13.152 element at address: 0x20001b4907c0 with size: 0.000244 MiB 00:05:13.152 element at address: 0x20001b4908c0 with size: 0.000244 MiB 00:05:13.152 element at address: 0x20001b4909c0 with size: 0.000244 MiB 00:05:13.152 element at address: 0x20001b490ac0 with size: 0.000244 MiB 00:05:13.152 element at address: 0x20001b490bc0 with size: 0.000244 MiB 00:05:13.152 element at address: 0x20001b490cc0 with size: 0.000244 MiB 00:05:13.152 element at address: 0x20001b490dc0 with size: 0.000244 MiB 00:05:13.152 element at address: 0x20001b490ec0 with size: 0.000244 MiB 00:05:13.152 element at address: 0x20001b490fc0 with size: 0.000244 MiB 00:05:13.152 element at address: 0x20001b4910c0 with size: 0.000244 MiB 00:05:13.152 element at address: 0x20001b4911c0 with size: 0.000244 MiB 00:05:13.152 element at address: 0x20001b4912c0 with size: 0.000244 MiB 00:05:13.152 element at address: 0x20001b4913c0 with size: 0.000244 MiB 00:05:13.152 element at address: 0x20001b4914c0 with size: 0.000244 MiB 00:05:13.152 element at address: 0x20001b4915c0 with size: 0.000244 MiB 00:05:13.152 element at address: 0x20001b4916c0 with size: 0.000244 MiB 00:05:13.152 element at address: 0x20001b4917c0 with size: 0.000244 MiB 
00:05:13.152 element at address: 0x20001b4918c0 with size: 0.000244 MiB 00:05:13.152 element at address: 0x20001b4919c0 with size: 0.000244 MiB 00:05:13.152 element at address: 0x20001b491ac0 with size: 0.000244 MiB 00:05:13.152 element at address: 0x20001b491bc0 with size: 0.000244 MiB 00:05:13.152 element at address: 0x20001b491cc0 with size: 0.000244 MiB 00:05:13.152 element at address: 0x20001b491dc0 with size: 0.000244 MiB 00:05:13.152 element at address: 0x20001b491ec0 with size: 0.000244 MiB 00:05:13.152 element at address: 0x20001b491fc0 with size: 0.000244 MiB 00:05:13.152 element at address: 0x20001b4920c0 with size: 0.000244 MiB 00:05:13.152 element at address: 0x20001b4921c0 with size: 0.000244 MiB 00:05:13.152 element at address: 0x20001b4922c0 with size: 0.000244 MiB 00:05:13.152 element at address: 0x20001b4923c0 with size: 0.000244 MiB 00:05:13.152 element at address: 0x20001b4924c0 with size: 0.000244 MiB 00:05:13.152 element at address: 0x20001b4925c0 with size: 0.000244 MiB 00:05:13.152 element at address: 0x20001b4926c0 with size: 0.000244 MiB 00:05:13.152 element at address: 0x20001b4927c0 with size: 0.000244 MiB 00:05:13.152 element at address: 0x20001b4928c0 with size: 0.000244 MiB 00:05:13.152 element at address: 0x20001b4929c0 with size: 0.000244 MiB 00:05:13.152 element at address: 0x20001b492ac0 with size: 0.000244 MiB 00:05:13.152 element at address: 0x20001b492bc0 with size: 0.000244 MiB 00:05:13.152 element at address: 0x20001b492cc0 with size: 0.000244 MiB 00:05:13.152 element at address: 0x20001b492dc0 with size: 0.000244 MiB 00:05:13.152 element at address: 0x20001b492ec0 with size: 0.000244 MiB 00:05:13.152 element at address: 0x20001b492fc0 with size: 0.000244 MiB 00:05:13.152 element at address: 0x20001b4930c0 with size: 0.000244 MiB 00:05:13.152 element at address: 0x20001b4931c0 with size: 0.000244 MiB 00:05:13.152 element at address: 0x20001b4932c0 with size: 0.000244 MiB 00:05:13.152 element at address: 0x20001b4933c0 with size: 0.000244 MiB 00:05:13.152 element at address: 0x20001b4934c0 with size: 0.000244 MiB 00:05:13.152 element at address: 0x20001b4935c0 with size: 0.000244 MiB 00:05:13.152 element at address: 0x20001b4936c0 with size: 0.000244 MiB 00:05:13.152 element at address: 0x20001b4937c0 with size: 0.000244 MiB 00:05:13.152 element at address: 0x20001b4938c0 with size: 0.000244 MiB 00:05:13.152 element at address: 0x20001b4939c0 with size: 0.000244 MiB 00:05:13.152 element at address: 0x20001b493ac0 with size: 0.000244 MiB 00:05:13.152 element at address: 0x20001b493bc0 with size: 0.000244 MiB 00:05:13.152 element at address: 0x20001b493cc0 with size: 0.000244 MiB 00:05:13.152 element at address: 0x20001b493dc0 with size: 0.000244 MiB 00:05:13.152 element at address: 0x20001b493ec0 with size: 0.000244 MiB 00:05:13.152 element at address: 0x20001b493fc0 with size: 0.000244 MiB 00:05:13.152 element at address: 0x20001b4940c0 with size: 0.000244 MiB 00:05:13.152 element at address: 0x20001b4941c0 with size: 0.000244 MiB 00:05:13.152 element at address: 0x20001b4942c0 with size: 0.000244 MiB 00:05:13.152 element at address: 0x20001b4943c0 with size: 0.000244 MiB 00:05:13.152 element at address: 0x20001b4944c0 with size: 0.000244 MiB 00:05:13.152 element at address: 0x20001b4945c0 with size: 0.000244 MiB 00:05:13.152 element at address: 0x20001b4946c0 with size: 0.000244 MiB 00:05:13.152 element at address: 0x20001b4947c0 with size: 0.000244 MiB 00:05:13.152 element at address: 0x20001b4948c0 with size: 0.000244 MiB 00:05:13.152 element at 
address: 0x20001b4949c0 with size: 0.000244 MiB 00:05:13.152 element at address: 0x20001b494ac0 with size: 0.000244 MiB 00:05:13.152 element at address: 0x20001b494bc0 with size: 0.000244 MiB 00:05:13.152 element at address: 0x20001b494cc0 with size: 0.000244 MiB 00:05:13.152 element at address: 0x20001b494dc0 with size: 0.000244 MiB 00:05:13.152 element at address: 0x20001b494ec0 with size: 0.000244 MiB 00:05:13.152 element at address: 0x20001b494fc0 with size: 0.000244 MiB 00:05:13.152 element at address: 0x20001b4950c0 with size: 0.000244 MiB 00:05:13.153 element at address: 0x20001b4951c0 with size: 0.000244 MiB 00:05:13.153 element at address: 0x20001b4952c0 with size: 0.000244 MiB 00:05:13.153 element at address: 0x20001b4953c0 with size: 0.000244 MiB 00:05:13.153 element at address: 0x200028864040 with size: 0.000244 MiB 00:05:13.153 element at address: 0x200028864140 with size: 0.000244 MiB 00:05:13.153 element at address: 0x20002886ae00 with size: 0.000244 MiB 00:05:13.153 element at address: 0x20002886b080 with size: 0.000244 MiB 00:05:13.153 element at address: 0x20002886b180 with size: 0.000244 MiB 00:05:13.153 element at address: 0x20002886b280 with size: 0.000244 MiB 00:05:13.153 element at address: 0x20002886b380 with size: 0.000244 MiB 00:05:13.153 element at address: 0x20002886b480 with size: 0.000244 MiB 00:05:13.153 element at address: 0x20002886b580 with size: 0.000244 MiB 00:05:13.153 element at address: 0x20002886b680 with size: 0.000244 MiB 00:05:13.153 element at address: 0x20002886b780 with size: 0.000244 MiB 00:05:13.153 element at address: 0x20002886b880 with size: 0.000244 MiB 00:05:13.153 element at address: 0x20002886b980 with size: 0.000244 MiB 00:05:13.153 element at address: 0x20002886ba80 with size: 0.000244 MiB 00:05:13.153 element at address: 0x20002886bb80 with size: 0.000244 MiB 00:05:13.153 element at address: 0x20002886bc80 with size: 0.000244 MiB 00:05:13.153 element at address: 0x20002886bd80 with size: 0.000244 MiB 00:05:13.153 element at address: 0x20002886be80 with size: 0.000244 MiB 00:05:13.153 element at address: 0x20002886bf80 with size: 0.000244 MiB 00:05:13.153 element at address: 0x20002886c080 with size: 0.000244 MiB 00:05:13.153 element at address: 0x20002886c180 with size: 0.000244 MiB 00:05:13.153 element at address: 0x20002886c280 with size: 0.000244 MiB 00:05:13.153 element at address: 0x20002886c380 with size: 0.000244 MiB 00:05:13.153 element at address: 0x20002886c480 with size: 0.000244 MiB 00:05:13.153 element at address: 0x20002886c580 with size: 0.000244 MiB 00:05:13.153 element at address: 0x20002886c680 with size: 0.000244 MiB 00:05:13.153 element at address: 0x20002886c780 with size: 0.000244 MiB 00:05:13.153 element at address: 0x20002886c880 with size: 0.000244 MiB 00:05:13.153 element at address: 0x20002886c980 with size: 0.000244 MiB 00:05:13.153 element at address: 0x20002886ca80 with size: 0.000244 MiB 00:05:13.153 element at address: 0x20002886cb80 with size: 0.000244 MiB 00:05:13.153 element at address: 0x20002886cc80 with size: 0.000244 MiB 00:05:13.153 element at address: 0x20002886cd80 with size: 0.000244 MiB 00:05:13.153 element at address: 0x20002886ce80 with size: 0.000244 MiB 00:05:13.153 element at address: 0x20002886cf80 with size: 0.000244 MiB 00:05:13.153 element at address: 0x20002886d080 with size: 0.000244 MiB 00:05:13.153 element at address: 0x20002886d180 with size: 0.000244 MiB 00:05:13.153 element at address: 0x20002886d280 with size: 0.000244 MiB 00:05:13.153 element at address: 0x20002886d380 
with size: 0.000244 MiB 00:05:13.153 element at address: 0x20002886d480 with size: 0.000244 MiB 00:05:13.153 element at address: 0x20002886d580 with size: 0.000244 MiB 00:05:13.153 element at address: 0x20002886d680 with size: 0.000244 MiB 00:05:13.153 element at address: 0x20002886d780 with size: 0.000244 MiB 00:05:13.153 element at address: 0x20002886d880 with size: 0.000244 MiB 00:05:13.153 element at address: 0x20002886d980 with size: 0.000244 MiB 00:05:13.153 element at address: 0x20002886da80 with size: 0.000244 MiB 00:05:13.153 element at address: 0x20002886db80 with size: 0.000244 MiB 00:05:13.153 element at address: 0x20002886dc80 with size: 0.000244 MiB 00:05:13.153 element at address: 0x20002886dd80 with size: 0.000244 MiB 00:05:13.153 element at address: 0x20002886de80 with size: 0.000244 MiB 00:05:13.153 element at address: 0x20002886df80 with size: 0.000244 MiB 00:05:13.153 element at address: 0x20002886e080 with size: 0.000244 MiB 00:05:13.153 element at address: 0x20002886e180 with size: 0.000244 MiB 00:05:13.153 element at address: 0x20002886e280 with size: 0.000244 MiB 00:05:13.153 element at address: 0x20002886e380 with size: 0.000244 MiB 00:05:13.153 element at address: 0x20002886e480 with size: 0.000244 MiB 00:05:13.153 element at address: 0x20002886e580 with size: 0.000244 MiB 00:05:13.153 element at address: 0x20002886e680 with size: 0.000244 MiB 00:05:13.153 element at address: 0x20002886e780 with size: 0.000244 MiB 00:05:13.153 element at address: 0x20002886e880 with size: 0.000244 MiB 00:05:13.153 element at address: 0x20002886e980 with size: 0.000244 MiB 00:05:13.153 element at address: 0x20002886ea80 with size: 0.000244 MiB 00:05:13.153 element at address: 0x20002886eb80 with size: 0.000244 MiB 00:05:13.153 element at address: 0x20002886ec80 with size: 0.000244 MiB 00:05:13.153 element at address: 0x20002886ed80 with size: 0.000244 MiB 00:05:13.153 element at address: 0x20002886ee80 with size: 0.000244 MiB 00:05:13.153 element at address: 0x20002886ef80 with size: 0.000244 MiB 00:05:13.153 element at address: 0x20002886f080 with size: 0.000244 MiB 00:05:13.153 element at address: 0x20002886f180 with size: 0.000244 MiB 00:05:13.153 element at address: 0x20002886f280 with size: 0.000244 MiB 00:05:13.153 element at address: 0x20002886f380 with size: 0.000244 MiB 00:05:13.153 element at address: 0x20002886f480 with size: 0.000244 MiB 00:05:13.153 element at address: 0x20002886f580 with size: 0.000244 MiB 00:05:13.153 element at address: 0x20002886f680 with size: 0.000244 MiB 00:05:13.153 element at address: 0x20002886f780 with size: 0.000244 MiB 00:05:13.153 element at address: 0x20002886f880 with size: 0.000244 MiB 00:05:13.153 element at address: 0x20002886f980 with size: 0.000244 MiB 00:05:13.153 element at address: 0x20002886fa80 with size: 0.000244 MiB 00:05:13.153 element at address: 0x20002886fb80 with size: 0.000244 MiB 00:05:13.153 element at address: 0x20002886fc80 with size: 0.000244 MiB 00:05:13.153 element at address: 0x20002886fd80 with size: 0.000244 MiB 00:05:13.153 element at address: 0x20002886fe80 with size: 0.000244 MiB 00:05:13.153 list of memzone associated elements. 
size: 607.930908 MiB 00:05:13.153 element at address: 0x20001b4954c0 with size: 211.416809 MiB 00:05:13.153 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:13.153 element at address: 0x20002886ff80 with size: 157.562622 MiB 00:05:13.153 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:13.153 element at address: 0x200012df1e40 with size: 100.055115 MiB 00:05:13.153 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_59936_0 00:05:13.153 element at address: 0x200000dff340 with size: 48.003113 MiB 00:05:13.153 associated memzone info: size: 48.002930 MiB name: MP_msgpool_59936_0 00:05:13.153 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:05:13.153 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_59936_0 00:05:13.153 element at address: 0x200019fbe900 with size: 20.255615 MiB 00:05:13.153 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:13.153 element at address: 0x2000327feb00 with size: 18.005127 MiB 00:05:13.153 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:13.153 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:05:13.153 associated memzone info: size: 3.000122 MiB name: MP_evtpool_59936_0 00:05:13.153 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:05:13.153 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_59936 00:05:13.153 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:05:13.153 associated memzone info: size: 1.007996 MiB name: MP_evtpool_59936 00:05:13.153 element at address: 0x2000196fde00 with size: 1.008179 MiB 00:05:13.153 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:13.153 element at address: 0x200019ebc780 with size: 1.008179 MiB 00:05:13.153 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:13.153 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:05:13.153 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:13.153 element at address: 0x200012cefcc0 with size: 1.008179 MiB 00:05:13.153 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:13.153 element at address: 0x200000cff100 with size: 1.000549 MiB 00:05:13.153 associated memzone info: size: 1.000366 MiB name: RG_ring_0_59936 00:05:13.153 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:05:13.153 associated memzone info: size: 1.000366 MiB name: RG_ring_1_59936 00:05:13.153 element at address: 0x200019affd40 with size: 1.000549 MiB 00:05:13.153 associated memzone info: size: 1.000366 MiB name: RG_ring_4_59936 00:05:13.153 element at address: 0x2000326fe8c0 with size: 1.000549 MiB 00:05:13.153 associated memzone info: size: 1.000366 MiB name: RG_ring_5_59936 00:05:13.153 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:05:13.153 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_59936 00:05:13.153 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:05:13.153 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_59936 00:05:13.153 element at address: 0x20001967dac0 with size: 0.500549 MiB 00:05:13.153 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:13.153 element at address: 0x200012c6f980 with size: 0.500549 MiB 00:05:13.153 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:13.153 element at address: 0x200019e7c440 with size: 0.250549 MiB 00:05:13.153 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:05:13.153 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:05:13.153 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_59936 00:05:13.153 element at address: 0x20000085df80 with size: 0.125549 MiB 00:05:13.153 associated memzone info: size: 0.125366 MiB name: RG_ring_2_59936 00:05:13.153 element at address: 0x2000192f5ac0 with size: 0.031799 MiB 00:05:13.153 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:13.153 element at address: 0x200028864240 with size: 0.023804 MiB 00:05:13.153 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:13.153 element at address: 0x200000859d40 with size: 0.016174 MiB 00:05:13.153 associated memzone info: size: 0.015991 MiB name: RG_ring_3_59936 00:05:13.153 element at address: 0x20002886a3c0 with size: 0.002502 MiB 00:05:13.153 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:13.153 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:05:13.153 associated memzone info: size: 0.000183 MiB name: MP_msgpool_59936 00:05:13.153 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:05:13.153 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_59936 00:05:13.153 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:05:13.153 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_59936 00:05:13.153 element at address: 0x20002886af00 with size: 0.000366 MiB 00:05:13.153 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:13.153 19:06:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:13.154 19:06:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 59936 00:05:13.154 19:06:57 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 59936 ']' 00:05:13.154 19:06:57 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 59936 00:05:13.154 19:06:57 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:05:13.154 19:06:57 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:13.154 19:06:57 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59936 00:05:13.154 19:06:57 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:13.154 19:06:57 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:13.154 19:06:57 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59936' 00:05:13.154 killing process with pid 59936 00:05:13.154 19:06:57 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 59936 00:05:13.154 19:06:57 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 59936 00:05:15.063 00:05:15.063 real 0m2.853s 00:05:15.063 user 0m2.819s 00:05:15.063 sys 0m0.429s 00:05:15.063 19:06:58 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:15.063 ************************************ 00:05:15.063 END TEST dpdk_mem_utility 00:05:15.063 ************************************ 00:05:15.063 19:06:58 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:15.063 19:06:58 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:15.063 19:06:58 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:15.063 19:06:58 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:15.063 19:06:58 -- common/autotest_common.sh@10 -- # set +x 
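The dpdk_mem_utility pass above exercises two helpers: the env_dpdk_get_mem_stats RPC, which makes the target dump its DPDK memory state, and scripts/dpdk_mem_info.py, which summarizes that dump. A minimal standalone sketch of the same flow, assuming an SPDK checkout at the path used in this run and a target already listening on the default /var/tmp/spdk.sock:

SPDK_DIR=/home/vagrant/spdk_repo/spdk
# Ask the running target to write its DPDK memory state out; the JSON reply
# traced above names the dump file: { "filename": "/tmp/spdk_mem_dump.txt" }
"$SPDK_DIR/scripts/rpc.py" env_dpdk_get_mem_stats
# Summarize the dump: heap totals, mempools, and memzones, as printed above
"$SPDK_DIR/scripts/dpdk_mem_info.py"
# Per-element detail for heap 0 (the long free/busy element lists above)
"$SPDK_DIR/scripts/dpdk_mem_info.py" -m 0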
00:05:15.063 ************************************ 00:05:15.063 START TEST event 00:05:15.063 ************************************ 00:05:15.063 19:06:58 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:15.063 * Looking for test storage... 00:05:15.063 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:15.063 19:06:59 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:15.063 19:06:59 event -- common/autotest_common.sh@1711 -- # lcov --version 00:05:15.063 19:06:59 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:15.063 19:06:59 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:15.063 19:06:59 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:15.063 19:06:59 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:15.063 19:06:59 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:15.063 19:06:59 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:15.063 19:06:59 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:15.063 19:06:59 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:15.063 19:06:59 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:15.063 19:06:59 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:15.063 19:06:59 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:15.063 19:06:59 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:15.063 19:06:59 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:15.063 19:06:59 event -- scripts/common.sh@344 -- # case "$op" in 00:05:15.063 19:06:59 event -- scripts/common.sh@345 -- # : 1 00:05:15.063 19:06:59 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:15.063 19:06:59 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:15.063 19:06:59 event -- scripts/common.sh@365 -- # decimal 1 00:05:15.063 19:06:59 event -- scripts/common.sh@353 -- # local d=1 00:05:15.063 19:06:59 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:15.063 19:06:59 event -- scripts/common.sh@355 -- # echo 1 00:05:15.063 19:06:59 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:15.063 19:06:59 event -- scripts/common.sh@366 -- # decimal 2 00:05:15.063 19:06:59 event -- scripts/common.sh@353 -- # local d=2 00:05:15.063 19:06:59 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:15.063 19:06:59 event -- scripts/common.sh@355 -- # echo 2 00:05:15.063 19:06:59 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:15.063 19:06:59 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:15.064 19:06:59 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:15.064 19:06:59 event -- scripts/common.sh@368 -- # return 0 00:05:15.064 19:06:59 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:15.064 19:06:59 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:15.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.064 --rc genhtml_branch_coverage=1 00:05:15.064 --rc genhtml_function_coverage=1 00:05:15.064 --rc genhtml_legend=1 00:05:15.064 --rc geninfo_all_blocks=1 00:05:15.064 --rc geninfo_unexecuted_blocks=1 00:05:15.064 00:05:15.064 ' 00:05:15.064 19:06:59 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:15.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.064 --rc genhtml_branch_coverage=1 00:05:15.064 --rc genhtml_function_coverage=1 00:05:15.064 --rc genhtml_legend=1 00:05:15.064 --rc 
geninfo_all_blocks=1 00:05:15.064 --rc geninfo_unexecuted_blocks=1 00:05:15.064 00:05:15.064 ' 00:05:15.064 19:06:59 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:15.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.064 --rc genhtml_branch_coverage=1 00:05:15.064 --rc genhtml_function_coverage=1 00:05:15.064 --rc genhtml_legend=1 00:05:15.064 --rc geninfo_all_blocks=1 00:05:15.064 --rc geninfo_unexecuted_blocks=1 00:05:15.064 00:05:15.064 ' 00:05:15.064 19:06:59 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:15.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.064 --rc genhtml_branch_coverage=1 00:05:15.064 --rc genhtml_function_coverage=1 00:05:15.064 --rc genhtml_legend=1 00:05:15.064 --rc geninfo_all_blocks=1 00:05:15.064 --rc geninfo_unexecuted_blocks=1 00:05:15.064 00:05:15.064 ' 00:05:15.064 19:06:59 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:15.064 19:06:59 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:15.064 19:06:59 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:15.064 19:06:59 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:05:15.064 19:06:59 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:15.064 19:06:59 event -- common/autotest_common.sh@10 -- # set +x 00:05:15.064 ************************************ 00:05:15.064 START TEST event_perf 00:05:15.064 ************************************ 00:05:15.064 19:06:59 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:15.064 Running I/O for 1 seconds...[2024-12-16 19:06:59.183396] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:05:15.064 [2024-12-16 19:06:59.183712] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60033 ] 00:05:15.064 [2024-12-16 19:06:59.342742] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:15.326 [2024-12-16 19:06:59.458150] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:15.326 [2024-12-16 19:06:59.458352] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:05:15.326 [2024-12-16 19:06:59.458669] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.326 [2024-12-16 19:06:59.458686] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:05:16.264 Running I/O for 1 seconds... 00:05:16.264 lcore 0: 148970 00:05:16.264 lcore 1: 148972 00:05:16.264 lcore 2: 148966 00:05:16.264 lcore 3: 148968 00:05:16.523 done. 
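For reference, the benchmark driven above is a plain binary taking a core mask and a duration; a sketch of invoking it directly with the arguments this run used (-m 0xF for lcores 0-3, -t 1 for one second), assuming the test binaries were built in this tree:

# Prints per-lcore event counts like the "lcore N: ..." lines above
/home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1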
00:05:16.523 00:05:16.523 real 0m1.472s 00:05:16.523 user 0m4.258s 00:05:16.523 sys 0m0.094s 00:05:16.523 19:07:00 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:16.523 ************************************ 00:05:16.523 END TEST event_perf 00:05:16.523 ************************************ 00:05:16.523 19:07:00 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:16.523 19:07:00 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:16.523 19:07:00 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:16.523 19:07:00 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:16.523 19:07:00 event -- common/autotest_common.sh@10 -- # set +x 00:05:16.523 ************************************ 00:05:16.523 START TEST event_reactor 00:05:16.523 ************************************ 00:05:16.523 19:07:00 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:16.523 [2024-12-16 19:07:00.700262] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:05:16.523 [2024-12-16 19:07:00.700360] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60073 ] 00:05:16.523 [2024-12-16 19:07:00.861150] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:16.781 [2024-12-16 19:07:00.968975] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.777 test_start 00:05:17.777 oneshot 00:05:17.777 tick 100 00:05:17.777 tick 100 00:05:17.777 tick 250 00:05:17.777 tick 100 00:05:17.777 tick 100 00:05:17.777 tick 250 00:05:17.777 tick 100 00:05:17.777 tick 500 00:05:17.777 tick 100 00:05:17.777 tick 100 00:05:17.777 tick 250 00:05:17.777 tick 100 00:05:17.777 tick 100 00:05:17.777 test_end 00:05:18.035 00:05:18.035 real 0m1.463s 00:05:18.035 user 0m1.278s 00:05:18.035 sys 0m0.076s 00:05:18.035 19:07:02 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:18.035 19:07:02 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:18.035 ************************************ 00:05:18.035 END TEST event_reactor 00:05:18.035 ************************************ 00:05:18.035 19:07:02 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:18.035 19:07:02 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:18.035 19:07:02 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:18.035 19:07:02 event -- common/autotest_common.sh@10 -- # set +x 00:05:18.035 ************************************ 00:05:18.035 START TEST event_reactor_perf 00:05:18.035 ************************************ 00:05:18.035 19:07:02 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:18.035 [2024-12-16 19:07:02.209550] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:05:18.035 [2024-12-16 19:07:02.209656] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60109 ] 00:05:18.035 [2024-12-16 19:07:02.368950] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:18.294 [2024-12-16 19:07:02.475738] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.667 test_start 00:05:19.667 test_end 00:05:19.667 Performance: 316802 events per second 00:05:19.667 ************************************ 00:05:19.667 END TEST event_reactor_perf 00:05:19.667 ************************************ 00:05:19.667 00:05:19.667 real 0m1.458s 00:05:19.667 user 0m1.275s 00:05:19.667 sys 0m0.075s 00:05:19.667 19:07:03 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:19.667 19:07:03 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:19.667 19:07:03 event -- event/event.sh@49 -- # uname -s 00:05:19.667 19:07:03 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:19.667 19:07:03 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:19.667 19:07:03 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:19.667 19:07:03 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:19.667 19:07:03 event -- common/autotest_common.sh@10 -- # set +x 00:05:19.667 ************************************ 00:05:19.667 START TEST event_scheduler 00:05:19.667 ************************************ 00:05:19.667 19:07:03 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:19.667 * Looking for test storage... 
00:05:19.667 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:19.667 19:07:03 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:19.667 19:07:03 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:19.667 19:07:03 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:05:19.667 19:07:03 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:19.667 19:07:03 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:19.667 19:07:03 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:19.667 19:07:03 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:19.667 19:07:03 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:19.667 19:07:03 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:19.667 19:07:03 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:19.667 19:07:03 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:19.667 19:07:03 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:19.667 19:07:03 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:19.667 19:07:03 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:19.667 19:07:03 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:19.667 19:07:03 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:19.667 19:07:03 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:19.667 19:07:03 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:19.667 19:07:03 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:19.667 19:07:03 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:19.667 19:07:03 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:19.667 19:07:03 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:19.667 19:07:03 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:19.667 19:07:03 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:19.667 19:07:03 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:19.667 19:07:03 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:19.667 19:07:03 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:19.667 19:07:03 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:19.667 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:19.667 19:07:03 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:19.667 19:07:03 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:19.667 19:07:03 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:19.667 19:07:03 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:19.667 19:07:03 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:19.667 19:07:03 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:19.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.667 --rc genhtml_branch_coverage=1 00:05:19.667 --rc genhtml_function_coverage=1 00:05:19.667 --rc genhtml_legend=1 00:05:19.667 --rc geninfo_all_blocks=1 00:05:19.667 --rc geninfo_unexecuted_blocks=1 00:05:19.667 00:05:19.667 ' 00:05:19.667 19:07:03 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:19.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.668 --rc genhtml_branch_coverage=1 00:05:19.668 --rc genhtml_function_coverage=1 00:05:19.668 --rc genhtml_legend=1 00:05:19.668 --rc geninfo_all_blocks=1 00:05:19.668 --rc geninfo_unexecuted_blocks=1 00:05:19.668 00:05:19.668 ' 00:05:19.668 19:07:03 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:19.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.668 --rc genhtml_branch_coverage=1 00:05:19.668 --rc genhtml_function_coverage=1 00:05:19.668 --rc genhtml_legend=1 00:05:19.668 --rc geninfo_all_blocks=1 00:05:19.668 --rc geninfo_unexecuted_blocks=1 00:05:19.668 00:05:19.668 ' 00:05:19.668 19:07:03 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:19.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.668 --rc genhtml_branch_coverage=1 00:05:19.668 --rc genhtml_function_coverage=1 00:05:19.668 --rc genhtml_legend=1 00:05:19.668 --rc geninfo_all_blocks=1 00:05:19.668 --rc geninfo_unexecuted_blocks=1 00:05:19.668 00:05:19.668 ' 00:05:19.668 19:07:03 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:19.668 19:07:03 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=60180 00:05:19.668 19:07:03 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:19.668 19:07:03 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 60180 00:05:19.668 19:07:03 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 60180 ']' 00:05:19.668 19:07:03 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:19.668 19:07:03 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:19.668 19:07:03 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:19.668 19:07:03 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:19.668 19:07:03 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:19.668 19:07:03 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:19.668 [2024-12-16 19:07:03.892266] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:05:19.668 [2024-12-16 19:07:03.892386] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60180 ] 00:05:19.925 [2024-12-16 19:07:04.053711] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:19.925 [2024-12-16 19:07:04.173805] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.925 [2024-12-16 19:07:04.174164] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:19.925 [2024-12-16 19:07:04.174219] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:05:19.925 [2024-12-16 19:07:04.174315] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:05:20.492 19:07:04 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:20.492 19:07:04 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:05:20.492 19:07:04 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:20.492 19:07:04 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:20.492 19:07:04 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:20.492 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:20.492 POWER: Cannot set governor of lcore 0 to userspace 00:05:20.492 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:20.492 POWER: Cannot set governor of lcore 0 to performance 00:05:20.492 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:20.492 POWER: Cannot set governor of lcore 0 to userspace 00:05:20.492 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:20.492 POWER: Cannot set governor of lcore 0 to userspace 00:05:20.492 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:05:20.492 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:20.492 POWER: Unable to set Power Management Environment for lcore 0 00:05:20.492 [2024-12-16 19:07:04.736500] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:05:20.492 [2024-12-16 19:07:04.736541] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:05:20.492 [2024-12-16 19:07:04.736563] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:20.492 [2024-12-16 19:07:04.736595] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:20.492 [2024-12-16 19:07:04.736659] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:20.492 [2024-12-16 19:07:04.736684] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:20.492 19:07:04 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:20.492 19:07:04 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:20.492 19:07:04 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:20.492 19:07:04 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:20.751 [2024-12-16 19:07:04.986106] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:05:20.751 19:07:04 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:20.751 19:07:04 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:20.751 19:07:04 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:20.751 19:07:04 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:20.751 19:07:04 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:20.751 ************************************ 00:05:20.751 START TEST scheduler_create_thread 00:05:20.751 ************************************ 00:05:20.751 19:07:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:05:20.751 19:07:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:20.751 19:07:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:20.751 19:07:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:20.751 2 00:05:20.751 19:07:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:20.751 19:07:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:20.751 19:07:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:20.751 19:07:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:20.751 3 00:05:20.751 19:07:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:20.751 19:07:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:20.751 19:07:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:20.751 19:07:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:20.751 4 00:05:20.751 19:07:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:20.751 19:07:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:20.751 19:07:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:20.751 19:07:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:20.751 5 00:05:20.751 19:07:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:20.751 19:07:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:20.751 19:07:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:20.751 19:07:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:20.751 6 00:05:20.751 19:07:05 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:20.751 19:07:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:20.751 19:07:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:20.751 19:07:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:20.751 7 00:05:20.751 19:07:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:20.751 19:07:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:20.751 19:07:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:20.751 19:07:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:20.751 8 00:05:20.751 19:07:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:20.751 19:07:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:20.751 19:07:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:20.751 19:07:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:20.751 9 00:05:20.751 19:07:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:20.751 19:07:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:20.751 19:07:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:20.751 19:07:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:20.751 10 00:05:20.751 19:07:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:20.751 19:07:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:20.751 19:07:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:20.751 19:07:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:20.751 19:07:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:20.751 19:07:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:20.751 19:07:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:20.751 19:07:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:20.751 19:07:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:20.751 19:07:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:20.751 19:07:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:20.751 19:07:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:20.751 19:07:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:20.751 19:07:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:20.751 19:07:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:20.751 19:07:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:20.751 19:07:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:20.751 19:07:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:21.009 ************************************ 00:05:21.009 END TEST scheduler_create_thread 00:05:21.009 ************************************ 00:05:21.009 19:07:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:21.009 00:05:21.009 real 0m0.107s 00:05:21.009 user 0m0.008s 00:05:21.009 sys 0m0.005s 00:05:21.009 19:07:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:21.009 19:07:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:21.009 19:07:05 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:21.009 19:07:05 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 60180 00:05:21.009 19:07:05 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 60180 ']' 00:05:21.009 19:07:05 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 60180 00:05:21.009 19:07:05 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:05:21.009 19:07:05 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:21.009 19:07:05 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60180 00:05:21.009 killing process with pid 60180 00:05:21.009 19:07:05 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:21.009 19:07:05 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:21.009 19:07:05 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60180' 00:05:21.009 19:07:05 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 60180 00:05:21.009 19:07:05 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 60180 00:05:21.267 [2024-12-16 19:07:05.586734] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
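scheduler_create_thread above builds its workload purely from the test's scheduler_plugin RPCs: active pinned threads on masks 0x1-0x8 at 100% activity, idle pinned counterparts at 0%, two unpinned threads, a set-active call, and a delete. A condensed sketch of those calls, assuming rpc_cmd resolves to scripts/rpc.py with the plugin importable as in this harness:

# Pinned thread on core 0, 100% active (the test repeats this for 0x2/0x4/0x8)
rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
# Idle pinned counterpart, 0% active
rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
# Unpinned thread created idle, then raised to 50% by its thread id (11 above)
rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0
rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50
# Create and immediately delete a thread (id 12 above) to exercise removal
rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100
rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12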
00:05:22.202 00:05:22.202 real 0m2.524s 00:05:22.202 user 0m4.247s 00:05:22.202 sys 0m0.363s 00:05:22.202 ************************************ 00:05:22.202 END TEST event_scheduler 00:05:22.202 ************************************ 00:05:22.202 19:07:06 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:22.202 19:07:06 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:22.202 19:07:06 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:22.202 19:07:06 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:22.202 19:07:06 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:22.202 19:07:06 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:22.202 19:07:06 event -- common/autotest_common.sh@10 -- # set +x 00:05:22.202 ************************************ 00:05:22.202 START TEST app_repeat 00:05:22.202 ************************************ 00:05:22.202 19:07:06 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:05:22.202 19:07:06 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:22.202 19:07:06 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:22.202 19:07:06 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:22.202 19:07:06 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:22.202 19:07:06 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:22.202 19:07:06 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:22.202 19:07:06 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:22.202 19:07:06 event.app_repeat -- event/event.sh@19 -- # repeat_pid=60253 00:05:22.202 19:07:06 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:22.202 19:07:06 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 60253' 00:05:22.202 Process app_repeat pid: 60253 00:05:22.202 spdk_app_start Round 0 00:05:22.202 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:22.202 19:07:06 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:22.203 19:07:06 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:22.203 19:07:06 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:22.203 19:07:06 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60253 /var/tmp/spdk-nbd.sock 00:05:22.203 19:07:06 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 60253 ']' 00:05:22.203 19:07:06 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:22.203 19:07:06 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:22.203 19:07:06 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:22.203 19:07:06 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:22.203 19:07:06 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:22.203 [2024-12-16 19:07:06.302463] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
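app_repeat launches the SPDK app several times ("rounds") against the same UNIX-domain RPC socket: the binary is started in the background on cores 0-1 (-m 0x3, -t 4 matching repeat_times in the trace), its pid is parked in a trap for cleanup, and the harness blocks until the socket accepts RPCs before configuring anything. Roughly, with a hand-rolled poll standing in for the real waitforlisten helper:

    sock=/var/tmp/spdk-nbd.sock
    app=/home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat

    # Start the app in the background and arm cleanup on any exit path.
    "$app" -r "$sock" -m 0x3 -t 4 &
    repeat_pid=$!
    trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT

    # Stand-in for waitforlisten: retry a trivial RPC until the socket answers.
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" rpc_get_methods &> /dev/null; do
        sleep 0.1
    done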
00:05:22.203 [2024-12-16 19:07:06.302566] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60253 ] 00:05:22.203 [2024-12-16 19:07:06.459570] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:22.464 [2024-12-16 19:07:06.569646] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:22.464 [2024-12-16 19:07:06.569661] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.032 19:07:07 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:23.032 19:07:07 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:23.032 19:07:07 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:23.292 Malloc0 00:05:23.292 19:07:07 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:23.553 Malloc1 00:05:23.553 19:07:07 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:23.553 19:07:07 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:23.553 19:07:07 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:23.553 19:07:07 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:23.553 19:07:07 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:23.553 19:07:07 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:23.553 19:07:07 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:23.553 19:07:07 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:23.553 19:07:07 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:23.553 19:07:07 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:23.553 19:07:07 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:23.553 19:07:07 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:23.553 19:07:07 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:23.553 19:07:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:23.553 19:07:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:23.553 19:07:07 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:23.553 /dev/nbd0 00:05:23.553 19:07:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:23.553 19:07:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:23.553 19:07:07 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:23.553 19:07:07 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:23.553 19:07:07 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:23.553 19:07:07 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:23.553 19:07:07 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:23.553 19:07:07 event.app_repeat -- 
common/autotest_common.sh@877 -- # break 00:05:23.553 19:07:07 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:23.553 19:07:07 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:23.553 19:07:07 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:23.553 1+0 records in 00:05:23.553 1+0 records out 00:05:23.553 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000371615 s, 11.0 MB/s 00:05:23.553 19:07:07 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:23.553 19:07:07 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:23.553 19:07:07 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:23.553 19:07:07 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:23.553 19:07:07 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:23.553 19:07:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:23.553 19:07:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:23.553 19:07:07 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:23.812 /dev/nbd1 00:05:23.812 19:07:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:23.812 19:07:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:23.812 19:07:08 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:23.812 19:07:08 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:23.812 19:07:08 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:23.812 19:07:08 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:23.812 19:07:08 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:23.812 19:07:08 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:23.812 19:07:08 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:23.812 19:07:08 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:23.812 19:07:08 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:23.812 1+0 records in 00:05:23.812 1+0 records out 00:05:23.812 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000165397 s, 24.8 MB/s 00:05:23.812 19:07:08 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:23.812 19:07:08 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:23.812 19:07:08 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:23.812 19:07:08 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:23.812 19:07:08 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:23.812 19:07:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:23.812 19:07:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:23.812 19:07:08 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:23.812 19:07:08 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 
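Each Malloc bdev is exported as a kernel NBD device, and waitfornbd then polls /proc/partitions (up to 20 tries) until the node registers before reading a single 4 KiB block back with O_DIRECT and checking the copy is non-empty. Condensed, the create-and-wait flow for one device looks like this (paths as in the trace; the retry delay is an assumption, since the trace only shows the loop bounds):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    testfile=/home/vagrant/spdk_repo/spdk/test/event/nbdtest

    # 64 MiB malloc bdev with a 4 KiB block size, exported as /dev/nbd0.
    $rpc bdev_malloc_create 64 4096
    $rpc nbd_start_disk Malloc0 /dev/nbd0

    # waitfornbd: wait for the kernel to publish the partition entry.
    for ((i = 1; i <= 20; i++)); do
        grep -q -w nbd0 /proc/partitions && break
        sleep 0.1   # assumed delay; the trace elides it
    done

    # Prove the device is readable: one direct-I/O block, non-zero size.
    dd if=/dev/nbd0 of="$testfile" bs=4096 count=1 iflag=direct
    [ "$(stat -c %s "$testfile")" != 0 ]
    rm -f "$testfile"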
00:05:23.812 19:07:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:24.070 19:07:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:24.070 { 00:05:24.070 "nbd_device": "/dev/nbd0", 00:05:24.070 "bdev_name": "Malloc0" 00:05:24.070 }, 00:05:24.070 { 00:05:24.070 "nbd_device": "/dev/nbd1", 00:05:24.070 "bdev_name": "Malloc1" 00:05:24.070 } 00:05:24.070 ]' 00:05:24.070 19:07:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:24.070 19:07:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:24.070 { 00:05:24.070 "nbd_device": "/dev/nbd0", 00:05:24.070 "bdev_name": "Malloc0" 00:05:24.070 }, 00:05:24.070 { 00:05:24.070 "nbd_device": "/dev/nbd1", 00:05:24.070 "bdev_name": "Malloc1" 00:05:24.070 } 00:05:24.070 ]' 00:05:24.070 19:07:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:24.070 /dev/nbd1' 00:05:24.070 19:07:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:24.070 /dev/nbd1' 00:05:24.070 19:07:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:24.070 19:07:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:24.070 19:07:08 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:24.070 19:07:08 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:24.070 19:07:08 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:24.070 19:07:08 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:24.070 19:07:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:24.070 19:07:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:24.070 19:07:08 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:24.070 19:07:08 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:24.070 19:07:08 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:24.070 19:07:08 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:24.070 256+0 records in 00:05:24.070 256+0 records out 00:05:24.070 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00695142 s, 151 MB/s 00:05:24.070 19:07:08 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:24.070 19:07:08 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:24.070 256+0 records in 00:05:24.070 256+0 records out 00:05:24.070 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0214659 s, 48.8 MB/s 00:05:24.070 19:07:08 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:24.070 19:07:08 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:24.070 256+0 records in 00:05:24.070 256+0 records out 00:05:24.071 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0186353 s, 56.3 MB/s 00:05:24.071 19:07:08 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:24.071 19:07:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:24.071 19:07:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:24.071 19:07:08 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:24.071 19:07:08 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:24.071 19:07:08 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:24.071 19:07:08 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:24.071 19:07:08 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:24.071 19:07:08 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:24.071 19:07:08 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:24.071 19:07:08 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:24.071 19:07:08 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:24.071 19:07:08 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:24.071 19:07:08 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:24.071 19:07:08 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:24.071 19:07:08 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:24.071 19:07:08 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:24.071 19:07:08 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:24.071 19:07:08 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:24.329 19:07:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:24.329 19:07:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:24.329 19:07:08 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:24.329 19:07:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:24.329 19:07:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:24.329 19:07:08 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:24.329 19:07:08 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:24.329 19:07:08 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:24.329 19:07:08 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:24.329 19:07:08 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:24.636 19:07:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:24.636 19:07:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:24.636 19:07:08 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:24.636 19:07:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:24.636 19:07:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:24.636 19:07:08 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:24.636 19:07:08 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:24.636 19:07:08 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:24.636 19:07:08 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:24.636 19:07:08 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:24.636 19:07:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:24.895 19:07:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:24.895 19:07:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:24.895 19:07:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:24.895 19:07:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:24.895 19:07:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:24.895 19:07:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:24.895 19:07:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:24.895 19:07:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:24.895 19:07:09 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:24.895 19:07:09 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:24.895 19:07:09 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:24.895 19:07:09 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:24.895 19:07:09 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:25.153 19:07:09 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:25.719 [2024-12-16 19:07:09.917550] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:25.719 [2024-12-16 19:07:09.993618] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:25.719 [2024-12-16 19:07:09.993652] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.978 [2024-12-16 19:07:10.097720] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:25.978 [2024-12-16 19:07:10.097772] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:28.507 spdk_app_start Round 1 00:05:28.507 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:28.507 19:07:12 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:28.507 19:07:12 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:28.507 19:07:12 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60253 /var/tmp/spdk-nbd.sock 00:05:28.507 19:07:12 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 60253 ']' 00:05:28.507 19:07:12 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:28.507 19:07:12 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:28.507 19:07:12 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
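After nbd_stop_disk has torn both devices down, the harness re-queries the target to prove the cleanup took: nbd_get_disks now returns an empty JSON array, the jq filter extracts no device paths, and grep -c /dev/nbd counts 0 (its non-zero exit status is swallowed, which is the bare `true` visible in the trace). The same check in isolation:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"

    # Which NBD devices does the target still export?
    nbd_disks_json=$($rpc nbd_get_disks)
    nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')

    # Expect zero matches; grep -c exits 1 on none, hence the || true.
    count=$(echo "$nbd_disks_name" | grep -c /dev/nbd || true)
    [ "$count" -eq 0 ]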
00:05:28.507 19:07:12 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:28.507 19:07:12 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:28.507 19:07:12 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:28.507 19:07:12 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:28.507 19:07:12 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:28.507 Malloc0 00:05:28.507 19:07:12 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:28.766 Malloc1 00:05:28.766 19:07:13 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:28.766 19:07:13 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:28.766 19:07:13 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:28.766 19:07:13 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:28.766 19:07:13 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:28.766 19:07:13 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:28.766 19:07:13 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:28.766 19:07:13 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:28.766 19:07:13 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:28.766 19:07:13 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:28.766 19:07:13 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:28.766 19:07:13 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:28.766 19:07:13 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:28.766 19:07:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:28.766 19:07:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:28.766 19:07:13 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:29.024 /dev/nbd0 00:05:29.024 19:07:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:29.024 19:07:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:29.024 19:07:13 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:29.024 19:07:13 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:29.024 19:07:13 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:29.024 19:07:13 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:29.024 19:07:13 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:29.024 19:07:13 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:29.024 19:07:13 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:29.024 19:07:13 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:29.024 19:07:13 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:29.024 1+0 records in 00:05:29.024 1+0 records out 
00:05:29.024 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000183229 s, 22.4 MB/s 00:05:29.024 19:07:13 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:29.024 19:07:13 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:29.024 19:07:13 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:29.024 19:07:13 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:29.024 19:07:13 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:29.024 19:07:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:29.024 19:07:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:29.024 19:07:13 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:29.283 /dev/nbd1 00:05:29.283 19:07:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:29.283 19:07:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:29.283 19:07:13 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:29.283 19:07:13 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:29.283 19:07:13 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:29.283 19:07:13 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:29.283 19:07:13 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:29.283 19:07:13 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:29.283 19:07:13 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:29.283 19:07:13 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:29.283 19:07:13 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:29.283 1+0 records in 00:05:29.283 1+0 records out 00:05:29.283 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000181981 s, 22.5 MB/s 00:05:29.283 19:07:13 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:29.283 19:07:13 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:29.283 19:07:13 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:29.283 19:07:13 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:29.283 19:07:13 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:29.283 19:07:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:29.283 19:07:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:29.283 19:07:13 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:29.283 19:07:13 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:29.283 19:07:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:29.542 19:07:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:29.542 { 00:05:29.542 "nbd_device": "/dev/nbd0", 00:05:29.542 "bdev_name": "Malloc0" 00:05:29.542 }, 00:05:29.542 { 00:05:29.542 "nbd_device": "/dev/nbd1", 00:05:29.542 "bdev_name": "Malloc1" 00:05:29.542 } 
00:05:29.542 ]' 00:05:29.542 19:07:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:29.542 { 00:05:29.542 "nbd_device": "/dev/nbd0", 00:05:29.542 "bdev_name": "Malloc0" 00:05:29.542 }, 00:05:29.542 { 00:05:29.542 "nbd_device": "/dev/nbd1", 00:05:29.542 "bdev_name": "Malloc1" 00:05:29.542 } 00:05:29.542 ]' 00:05:29.542 19:07:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:29.542 19:07:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:29.542 /dev/nbd1' 00:05:29.542 19:07:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:29.542 /dev/nbd1' 00:05:29.542 19:07:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:29.542 19:07:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:29.542 19:07:13 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:29.542 19:07:13 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:29.542 19:07:13 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:29.542 19:07:13 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:29.542 19:07:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:29.542 19:07:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:29.542 19:07:13 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:29.542 19:07:13 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:29.542 19:07:13 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:29.542 19:07:13 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:29.542 256+0 records in 00:05:29.542 256+0 records out 00:05:29.542 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0080356 s, 130 MB/s 00:05:29.542 19:07:13 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:29.542 19:07:13 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:29.542 256+0 records in 00:05:29.542 256+0 records out 00:05:29.542 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0109802 s, 95.5 MB/s 00:05:29.542 19:07:13 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:29.542 19:07:13 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:29.542 256+0 records in 00:05:29.542 256+0 records out 00:05:29.542 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0161267 s, 65.0 MB/s 00:05:29.542 19:07:13 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:29.542 19:07:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:29.542 19:07:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:29.542 19:07:13 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:29.543 19:07:13 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:29.543 19:07:13 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:29.543 19:07:13 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:29.543 19:07:13 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:05:29.543 19:07:13 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:29.543 19:07:13 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:29.543 19:07:13 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:29.543 19:07:13 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:29.543 19:07:13 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:29.543 19:07:13 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:29.543 19:07:13 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:29.543 19:07:13 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:29.543 19:07:13 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:29.543 19:07:13 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:29.543 19:07:13 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:29.801 19:07:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:29.801 19:07:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:29.801 19:07:13 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:29.801 19:07:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:29.801 19:07:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:29.801 19:07:13 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:29.801 19:07:14 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:29.801 19:07:14 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:29.801 19:07:14 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:29.801 19:07:14 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:30.059 19:07:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:30.059 19:07:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:30.059 19:07:14 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:30.059 19:07:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:30.059 19:07:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:30.059 19:07:14 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:30.059 19:07:14 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:30.059 19:07:14 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:30.059 19:07:14 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:30.059 19:07:14 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:30.059 19:07:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:30.318 19:07:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:30.318 19:07:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:30.318 19:07:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:05:30.318 19:07:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:30.318 19:07:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:30.318 19:07:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:30.318 19:07:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:30.318 19:07:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:30.318 19:07:14 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:30.318 19:07:14 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:30.318 19:07:14 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:30.318 19:07:14 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:30.318 19:07:14 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:30.576 19:07:14 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:31.141 [2024-12-16 19:07:15.297391] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:31.141 [2024-12-16 19:07:15.375608] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:31.141 [2024-12-16 19:07:15.375718] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.141 [2024-12-16 19:07:15.477951] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:31.141 [2024-12-16 19:07:15.478023] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:33.800 spdk_app_start Round 2 00:05:33.800 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:33.800 19:07:17 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:33.800 19:07:17 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:33.800 19:07:17 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60253 /var/tmp/spdk-nbd.sock 00:05:33.800 19:07:17 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 60253 ']' 00:05:33.800 19:07:17 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:33.800 19:07:17 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:33.800 19:07:17 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
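The per-round payload is nbd_dd_data_verify: in write mode it fills a 1 MiB scratch file from /dev/urandom and dd's it onto every NBD device with O_DIRECT; in verify mode it cmp's the first 1 MiB of each device against that same file, so the data has to round-trip through the malloc bdevs byte-for-byte. Boiled down:

    nbd_list=(/dev/nbd0 /dev/nbd1)
    tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest

    # Write: one shared random pattern, pushed to every exported device.
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
    for dev in "${nbd_list[@]}"; do
        dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
    done

    # Verify: every device must read back identical to the source file.
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp_file" "$dev"
    done
    rm "$tmp_file"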
00:05:33.800 19:07:17 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:33.800 19:07:17 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:33.800 19:07:17 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:33.800 19:07:17 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:33.800 19:07:17 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:34.058 Malloc0 00:05:34.058 19:07:18 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:34.316 Malloc1 00:05:34.316 19:07:18 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:34.316 19:07:18 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:34.316 19:07:18 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:34.316 19:07:18 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:34.316 19:07:18 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:34.316 19:07:18 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:34.316 19:07:18 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:34.316 19:07:18 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:34.316 19:07:18 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:34.316 19:07:18 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:34.316 19:07:18 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:34.316 19:07:18 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:34.316 19:07:18 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:34.316 19:07:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:34.316 19:07:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:34.316 19:07:18 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:34.316 /dev/nbd0 00:05:34.316 19:07:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:34.316 19:07:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:34.316 19:07:18 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:34.316 19:07:18 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:34.316 19:07:18 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:34.316 19:07:18 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:34.316 19:07:18 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:34.316 19:07:18 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:34.316 19:07:18 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:34.316 19:07:18 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:34.316 19:07:18 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:34.316 1+0 records in 00:05:34.316 1+0 records out 
00:05:34.316 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000468969 s, 8.7 MB/s 00:05:34.316 19:07:18 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:34.575 19:07:18 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:34.575 19:07:18 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:34.575 19:07:18 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:34.575 19:07:18 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:34.575 19:07:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:34.575 19:07:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:34.575 19:07:18 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:34.575 /dev/nbd1 00:05:34.575 19:07:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:34.575 19:07:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:34.575 19:07:18 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:34.575 19:07:18 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:34.575 19:07:18 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:34.575 19:07:18 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:34.575 19:07:18 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:34.575 19:07:18 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:34.575 19:07:18 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:34.575 19:07:18 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:34.575 19:07:18 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:34.575 1+0 records in 00:05:34.575 1+0 records out 00:05:34.575 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00017795 s, 23.0 MB/s 00:05:34.575 19:07:18 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:34.575 19:07:18 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:34.575 19:07:18 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:34.575 19:07:18 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:34.575 19:07:18 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:34.575 19:07:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:34.575 19:07:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:34.575 19:07:18 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:34.575 19:07:18 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:34.575 19:07:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:34.833 19:07:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:34.833 { 00:05:34.833 "nbd_device": "/dev/nbd0", 00:05:34.833 "bdev_name": "Malloc0" 00:05:34.833 }, 00:05:34.833 { 00:05:34.833 "nbd_device": "/dev/nbd1", 00:05:34.833 "bdev_name": "Malloc1" 00:05:34.833 } 
00:05:34.833 ]' 00:05:34.833 19:07:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:34.833 19:07:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:34.833 { 00:05:34.833 "nbd_device": "/dev/nbd0", 00:05:34.833 "bdev_name": "Malloc0" 00:05:34.833 }, 00:05:34.833 { 00:05:34.833 "nbd_device": "/dev/nbd1", 00:05:34.833 "bdev_name": "Malloc1" 00:05:34.833 } 00:05:34.833 ]' 00:05:34.833 19:07:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:34.833 /dev/nbd1' 00:05:34.833 19:07:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:34.833 /dev/nbd1' 00:05:34.833 19:07:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:34.833 19:07:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:34.833 19:07:19 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:34.833 19:07:19 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:34.833 19:07:19 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:34.833 19:07:19 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:34.833 19:07:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:34.833 19:07:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:34.833 19:07:19 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:34.833 19:07:19 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:34.833 19:07:19 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:34.834 19:07:19 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:34.834 256+0 records in 00:05:34.834 256+0 records out 00:05:34.834 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00824542 s, 127 MB/s 00:05:34.834 19:07:19 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:34.834 19:07:19 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:34.834 256+0 records in 00:05:34.834 256+0 records out 00:05:34.834 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0169078 s, 62.0 MB/s 00:05:34.834 19:07:19 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:34.834 19:07:19 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:34.834 256+0 records in 00:05:34.834 256+0 records out 00:05:34.834 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.020692 s, 50.7 MB/s 00:05:34.834 19:07:19 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:34.834 19:07:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:34.834 19:07:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:34.834 19:07:19 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:34.834 19:07:19 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:34.834 19:07:19 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:34.834 19:07:19 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:34.834 19:07:19 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:05:34.834 19:07:19 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:34.834 19:07:19 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:34.834 19:07:19 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:34.834 19:07:19 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:34.834 19:07:19 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:34.834 19:07:19 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:34.834 19:07:19 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:34.834 19:07:19 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:34.834 19:07:19 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:34.834 19:07:19 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:34.834 19:07:19 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:35.092 19:07:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:35.092 19:07:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:35.092 19:07:19 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:35.092 19:07:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:35.092 19:07:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:35.092 19:07:19 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:35.092 19:07:19 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:35.092 19:07:19 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:35.092 19:07:19 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:35.092 19:07:19 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:35.353 19:07:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:35.353 19:07:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:35.353 19:07:19 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:35.353 19:07:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:35.353 19:07:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:35.353 19:07:19 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:35.353 19:07:19 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:35.353 19:07:19 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:35.353 19:07:19 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:35.353 19:07:19 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:35.353 19:07:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:35.612 19:07:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:35.612 19:07:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:35.612 19:07:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:05:35.612 19:07:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:35.612 19:07:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:35.612 19:07:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:35.612 19:07:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:35.612 19:07:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:35.612 19:07:19 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:35.612 19:07:19 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:35.612 19:07:19 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:35.612 19:07:19 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:35.613 19:07:19 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:35.871 19:07:20 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:36.438 [2024-12-16 19:07:20.686445] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:36.438 [2024-12-16 19:07:20.763390] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:36.438 [2024-12-16 19:07:20.763549] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.697 [2024-12-16 19:07:20.863988] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:36.697 [2024-12-16 19:07:20.864031] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:39.227 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:39.227 19:07:23 event.app_repeat -- event/event.sh@38 -- # waitforlisten 60253 /var/tmp/spdk-nbd.sock 00:05:39.227 19:07:23 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 60253 ']' 00:05:39.227 19:07:23 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:39.227 19:07:23 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:39.227 19:07:23 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
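This is the final round; once it returns, the trace below shows the harness tearing the app down through killprocess, which guards the kill: it first confirms the pid is still alive (kill -0), then on Linux reads the process's comm name to make sure it is not about to signal sudo (here it resolves to reactor_0, the SPDK primary reactor), and only then TERMs and reaps it. A trimmed sketch of that guard sequence:

    killprocess() {
        local pid=$1
        # Already gone? Nothing to do.
        kill -0 "$pid" 2> /dev/null || return 0

        if [ "$(uname)" = Linux ]; then
            # Never TERM sudo itself; the real helper handles that case
            # separately, this sketch simply refuses.
            [ "$(ps --no-headers -o comm= "$pid")" = sudo ] && return 1
        fi

        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    }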
00:05:39.227 19:07:23 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:39.227 19:07:23 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:39.227 19:07:23 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:39.227 19:07:23 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:39.227 19:07:23 event.app_repeat -- event/event.sh@39 -- # killprocess 60253 00:05:39.227 19:07:23 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 60253 ']' 00:05:39.227 19:07:23 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 60253 00:05:39.227 19:07:23 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:05:39.227 19:07:23 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:39.227 19:07:23 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60253 00:05:39.227 killing process with pid 60253 00:05:39.227 19:07:23 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:39.227 19:07:23 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:39.227 19:07:23 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60253' 00:05:39.227 19:07:23 event.app_repeat -- common/autotest_common.sh@973 -- # kill 60253 00:05:39.227 19:07:23 event.app_repeat -- common/autotest_common.sh@978 -- # wait 60253 00:05:39.794 spdk_app_start is called in Round 0. 00:05:39.794 Shutdown signal received, stop current app iteration 00:05:39.794 Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 reinitialization... 00:05:39.794 spdk_app_start is called in Round 1. 00:05:39.794 Shutdown signal received, stop current app iteration 00:05:39.794 Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 reinitialization... 00:05:39.794 spdk_app_start is called in Round 2. 00:05:39.794 Shutdown signal received, stop current app iteration 00:05:39.794 Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 reinitialization... 00:05:39.794 spdk_app_start is called in Round 3. 00:05:39.794 Shutdown signal received, stop current app iteration 00:05:39.794 19:07:23 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:39.794 ************************************ 00:05:39.794 END TEST app_repeat 00:05:39.794 ************************************ 00:05:39.794 19:07:23 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:39.794 00:05:39.794 real 0m17.626s 00:05:39.794 user 0m38.612s 00:05:39.794 sys 0m2.072s 00:05:39.794 19:07:23 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:39.794 19:07:23 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:39.794 19:07:23 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:39.794 19:07:23 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:39.794 19:07:23 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:39.794 19:07:23 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:39.794 19:07:23 event -- common/autotest_common.sh@10 -- # set +x 00:05:39.794 ************************************ 00:05:39.794 START TEST cpu_locks 00:05:39.794 ************************************ 00:05:39.794 19:07:23 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:39.794 * Looking for test storage... 
00:05:39.794 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:39.794 19:07:23 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:39.794 19:07:23 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:05:39.795 19:07:23 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:39.795 19:07:24 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:39.795 19:07:24 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:39.795 19:07:24 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:39.795 19:07:24 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:39.795 19:07:24 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:39.795 19:07:24 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:39.795 19:07:24 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:39.795 19:07:24 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:39.795 19:07:24 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:39.795 19:07:24 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:39.795 19:07:24 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:39.795 19:07:24 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:39.795 19:07:24 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:39.795 19:07:24 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:39.795 19:07:24 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:39.795 19:07:24 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:39.795 19:07:24 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:39.795 19:07:24 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:39.795 19:07:24 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:39.795 19:07:24 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:39.795 19:07:24 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:39.795 19:07:24 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:39.795 19:07:24 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:39.795 19:07:24 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:39.795 19:07:24 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:39.795 19:07:24 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:39.795 19:07:24 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:39.795 19:07:24 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:39.795 19:07:24 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:39.795 19:07:24 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:39.795 19:07:24 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:39.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.795 --rc genhtml_branch_coverage=1 00:05:39.795 --rc genhtml_function_coverage=1 00:05:39.795 --rc genhtml_legend=1 00:05:39.795 --rc geninfo_all_blocks=1 00:05:39.795 --rc geninfo_unexecuted_blocks=1 00:05:39.795 00:05:39.795 ' 00:05:39.795 19:07:24 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:39.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.795 --rc genhtml_branch_coverage=1 00:05:39.795 --rc genhtml_function_coverage=1 
00:05:39.795 --rc genhtml_legend=1 00:05:39.795 --rc geninfo_all_blocks=1 00:05:39.795 --rc geninfo_unexecuted_blocks=1 00:05:39.795 00:05:39.795 ' 00:05:39.795 19:07:24 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:39.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.795 --rc genhtml_branch_coverage=1 00:05:39.795 --rc genhtml_function_coverage=1 00:05:39.795 --rc genhtml_legend=1 00:05:39.795 --rc geninfo_all_blocks=1 00:05:39.795 --rc geninfo_unexecuted_blocks=1 00:05:39.795 00:05:39.795 ' 00:05:39.795 19:07:24 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:39.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.795 --rc genhtml_branch_coverage=1 00:05:39.795 --rc genhtml_function_coverage=1 00:05:39.795 --rc genhtml_legend=1 00:05:39.795 --rc geninfo_all_blocks=1 00:05:39.795 --rc geninfo_unexecuted_blocks=1 00:05:39.795 00:05:39.795 ' 00:05:39.795 19:07:24 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:39.795 19:07:24 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:39.795 19:07:24 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:39.795 19:07:24 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:39.795 19:07:24 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:39.795 19:07:24 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:39.795 19:07:24 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:39.795 ************************************ 00:05:39.795 START TEST default_locks 00:05:39.795 ************************************ 00:05:39.795 19:07:24 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:05:39.795 19:07:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=60689 00:05:39.795 19:07:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 60689 00:05:39.795 19:07:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:39.795 19:07:24 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 60689 ']' 00:05:39.795 19:07:24 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:39.795 19:07:24 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:39.795 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:39.795 19:07:24 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:39.795 19:07:24 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:39.795 19:07:24 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:40.056 [2024-12-16 19:07:24.151619] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:05:40.056 [2024-12-16 19:07:24.151733] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60689 ] 00:05:40.056 [2024-12-16 19:07:24.304945] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.056 [2024-12-16 19:07:24.381912] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.628 19:07:24 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:40.628 19:07:24 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:05:40.628 19:07:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 60689 00:05:40.629 19:07:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 60689 00:05:40.629 19:07:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:40.885 19:07:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 60689 00:05:40.885 19:07:25 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 60689 ']' 00:05:40.885 19:07:25 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 60689 00:05:40.885 19:07:25 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:05:40.885 19:07:25 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:40.885 19:07:25 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60689 00:05:40.885 killing process with pid 60689 00:05:40.885 19:07:25 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:40.885 19:07:25 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:40.885 19:07:25 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60689' 00:05:40.885 19:07:25 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 60689 00:05:40.885 19:07:25 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 60689 00:05:42.257 19:07:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 60689 00:05:42.257 19:07:26 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:05:42.257 19:07:26 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 60689 00:05:42.257 19:07:26 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:42.257 19:07:26 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:42.257 19:07:26 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:42.257 19:07:26 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:42.257 19:07:26 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 60689 00:05:42.257 19:07:26 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 60689 ']' 00:05:42.257 19:07:26 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:42.257 19:07:26 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:42.257 Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:42.257 ERROR: process (pid: 60689) is no longer running 00:05:42.257 19:07:26 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:42.257 19:07:26 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:42.257 19:07:26 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:42.257 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (60689) - No such process 00:05:42.257 19:07:26 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:42.257 19:07:26 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:05:42.257 19:07:26 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:05:42.257 19:07:26 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:42.257 19:07:26 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:42.257 19:07:26 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:42.257 19:07:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:42.257 19:07:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:42.257 19:07:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:42.257 19:07:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:42.257 00:05:42.257 real 0m2.226s 00:05:42.257 user 0m2.212s 00:05:42.257 sys 0m0.405s 00:05:42.257 19:07:26 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:42.257 ************************************ 00:05:42.257 19:07:26 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:42.257 END TEST default_locks 00:05:42.257 ************************************ 00:05:42.257 19:07:26 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:42.257 19:07:26 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:42.257 19:07:26 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:42.257 19:07:26 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:42.257 ************************************ 00:05:42.257 START TEST default_locks_via_rpc 00:05:42.257 ************************************ 00:05:42.257 19:07:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:05:42.257 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
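The default_locks run that closes above leans on one probe throughout: lslocks lists the file locks held by the target pid, and a grep for the spdk_cpu_lock prefix decides whether the core is still claimed. A minimal standalone sketch of that probe — not the test's exact code — assuming util-linux lslocks as used in this run, with the pid and lock-name prefix taken from the log:

    #!/usr/bin/env bash
    # Probe whether an SPDK target still holds its CPU-core lock files.
    # Sketch of the locks_exist idiom seen in the log above.
    pid=60689   # spdk_tgt pid from the default_locks run
    if lslocks -p "$pid" | grep -q spdk_cpu_lock; then
        echo "pid $pid holds spdk_cpu_lock_* files"
    else
        echo "no CPU-core locks held by pid $pid"
    fi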
00:05:42.257 19:07:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=60742 00:05:42.257 19:07:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 60742 00:05:42.257 19:07:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60742 ']' 00:05:42.257 19:07:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:42.257 19:07:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:42.257 19:07:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:42.257 19:07:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:42.257 19:07:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:42.257 19:07:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:42.257 [2024-12-16 19:07:26.422069] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:05:42.257 [2024-12-16 19:07:26.422305] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60742 ] 00:05:42.257 [2024-12-16 19:07:26.569055] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.514 [2024-12-16 19:07:26.645060] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.080 19:07:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:43.080 19:07:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:43.080 19:07:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:43.080 19:07:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:43.080 19:07:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:43.080 19:07:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:43.080 19:07:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:43.080 19:07:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:43.080 19:07:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:43.080 19:07:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:43.080 19:07:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:43.080 19:07:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:43.080 19:07:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:43.080 19:07:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:43.080 19:07:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 60742 00:05:43.080 19:07:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 60742 00:05:43.080 
19:07:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:43.338 19:07:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 60742 00:05:43.338 19:07:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 60742 ']' 00:05:43.339 19:07:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 60742 00:05:43.339 19:07:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:05:43.339 19:07:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:43.339 19:07:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60742 00:05:43.339 killing process with pid 60742 00:05:43.339 19:07:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:43.339 19:07:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:43.339 19:07:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60742' 00:05:43.339 19:07:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 60742 00:05:43.339 19:07:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 60742 00:05:44.713 ************************************ 00:05:44.713 END TEST default_locks_via_rpc 00:05:44.713 ************************************ 00:05:44.713 00:05:44.713 real 0m2.279s 00:05:44.713 user 0m2.300s 00:05:44.713 sys 0m0.419s 00:05:44.713 19:07:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:44.713 19:07:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:44.713 19:07:28 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:44.713 19:07:28 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:44.713 19:07:28 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:44.713 19:07:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:44.713 ************************************ 00:05:44.713 START TEST non_locking_app_on_locked_coremask 00:05:44.713 ************************************ 00:05:44.713 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
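default_locks_via_rpc, which finishes above, toggles the same lock files at runtime instead of at startup: framework_disable_cpumask_locks releases the spdk_cpu_lock_* files and framework_enable_cpumask_locks re-claims them, as the rpc_cmd calls in the log show. A hand-driven sketch of that toggle using the rpc.py helper this log invokes elsewhere — the socket path is this run's default, and the expected lock counts are assumptions for a target started with -m 0x1:

    # Release and re-claim the CPU-core lock files on a live spdk_tgt.
    # A sketch, not the test script itself.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" -s /var/tmp/spdk.sock framework_disable_cpumask_locks
    lslocks -p "$(pidof spdk_tgt)" | grep -c spdk_cpu_lock   # expect 0
    "$rpc" -s /var/tmp/spdk.sock framework_enable_cpumask_locks
    lslocks -p "$(pidof spdk_tgt)" | grep -c spdk_cpu_lock   # expect 1 (one core in mask 0x1)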
00:05:44.713 19:07:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:05:44.713 19:07:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=60794 00:05:44.713 19:07:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 60794 /var/tmp/spdk.sock 00:05:44.713 19:07:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60794 ']' 00:05:44.713 19:07:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:44.713 19:07:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:44.713 19:07:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:44.713 19:07:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:44.713 19:07:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:44.713 19:07:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:44.713 [2024-12-16 19:07:28.765653] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:05:44.713 [2024-12-16 19:07:28.765787] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60794 ] 00:05:44.713 [2024-12-16 19:07:28.925554] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.713 [2024-12-16 19:07:29.040362] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.280 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:45.280 19:07:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:45.280 19:07:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:45.280 19:07:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=60810 00:05:45.280 19:07:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 60810 /var/tmp/spdk2.sock 00:05:45.280 19:07:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60810 ']' 00:05:45.280 19:07:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:45.280 19:07:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:45.280 19:07:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:45.280 19:07:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:05:45.280 19:07:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:45.280 19:07:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:45.538 [2024-12-16 19:07:29.692498] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:05:45.538 [2024-12-16 19:07:29.692735] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60810 ] 00:05:45.538 [2024-12-16 19:07:29.864328] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:45.538 [2024-12-16 19:07:29.864379] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.796 [2024-12-16 19:07:30.062615] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.173 19:07:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:47.173 19:07:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:47.173 19:07:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 60794 00:05:47.173 19:07:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60794 00:05:47.173 19:07:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:47.173 19:07:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 60794 00:05:47.173 19:07:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60794 ']' 00:05:47.173 19:07:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 60794 00:05:47.173 19:07:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:47.173 19:07:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:47.173 19:07:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60794 00:05:47.173 19:07:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:47.173 19:07:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:47.173 killing process with pid 60794 00:05:47.173 19:07:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60794' 00:05:47.173 19:07:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 60794 00:05:47.173 19:07:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 60794 00:05:49.701 19:07:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 60810 00:05:49.701 19:07:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60810 ']' 00:05:49.701 19:07:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 60810 00:05:49.701 19:07:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 
-- # uname 00:05:49.701 19:07:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:49.701 19:07:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60810 00:05:49.701 killing process with pid 60810 00:05:49.701 19:07:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:49.701 19:07:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:49.701 19:07:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60810' 00:05:49.701 19:07:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 60810 00:05:49.701 19:07:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 60810 00:05:51.086 00:05:51.086 real 0m6.371s 00:05:51.086 user 0m6.563s 00:05:51.086 sys 0m0.807s 00:05:51.086 19:07:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:51.086 ************************************ 00:05:51.086 END TEST non_locking_app_on_locked_coremask 00:05:51.086 ************************************ 00:05:51.086 19:07:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:51.086 19:07:35 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:51.086 19:07:35 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:51.086 19:07:35 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:51.086 19:07:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:51.086 ************************************ 00:05:51.086 START TEST locking_app_on_unlocked_coremask 00:05:51.086 ************************************ 00:05:51.086 19:07:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:05:51.086 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:51.086 19:07:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=60901 00:05:51.086 19:07:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 60901 /var/tmp/spdk.sock 00:05:51.086 19:07:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60901 ']' 00:05:51.086 19:07:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:51.086 19:07:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:51.086 19:07:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:51.086 19:07:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:51.086 19:07:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:51.086 19:07:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:51.086 [2024-12-16 19:07:35.176776] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:05:51.086 [2024-12-16 19:07:35.176901] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60901 ] 00:05:51.086 [2024-12-16 19:07:35.332735] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:51.086 [2024-12-16 19:07:35.332769] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.086 [2024-12-16 19:07:35.411225] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.651 19:07:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:51.651 19:07:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:51.651 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:51.651 19:07:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=60917 00:05:51.651 19:07:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 60917 /var/tmp/spdk2.sock 00:05:51.651 19:07:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60917 ']' 00:05:51.651 19:07:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:51.651 19:07:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:51.651 19:07:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:51.651 19:07:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:51.651 19:07:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:51.651 19:07:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:51.909 [2024-12-16 19:07:36.077096] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:05:51.909 [2024-12-16 19:07:36.077435] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60917 ] 00:05:51.909 [2024-12-16 19:07:36.243773] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.167 [2024-12-16 19:07:36.404274] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.101 19:07:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:53.101 19:07:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:53.101 19:07:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 60917 00:05:53.101 19:07:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60917 00:05:53.101 19:07:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:53.360 19:07:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 60901 00:05:53.360 19:07:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60901 ']' 00:05:53.360 19:07:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 60901 00:05:53.360 19:07:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:53.360 19:07:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:53.360 19:07:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60901 00:05:53.360 killing process with pid 60901 00:05:53.360 19:07:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:53.360 19:07:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:53.360 19:07:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60901' 00:05:53.360 19:07:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 60901 00:05:53.360 19:07:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 60901 00:05:55.888 19:07:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 60917 00:05:55.888 19:07:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60917 ']' 00:05:55.888 19:07:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 60917 00:05:55.888 19:07:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:55.888 19:07:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:55.888 19:07:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60917 00:05:55.888 killing process with pid 60917 00:05:55.888 19:07:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:55.888 19:07:40 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:55.888 19:07:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60917' 00:05:55.888 19:07:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 60917 00:05:55.888 19:07:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 60917 00:05:57.265 ************************************ 00:05:57.265 END TEST locking_app_on_unlocked_coremask 00:05:57.265 ************************************ 00:05:57.265 00:05:57.265 real 0m6.151s 00:05:57.265 user 0m6.418s 00:05:57.265 sys 0m0.831s 00:05:57.265 19:07:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:57.265 19:07:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:57.265 19:07:41 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:57.265 19:07:41 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:57.265 19:07:41 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:57.265 19:07:41 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:57.265 ************************************ 00:05:57.265 START TEST locking_app_on_locked_coremask 00:05:57.265 ************************************ 00:05:57.265 19:07:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:05:57.265 19:07:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=61014 00:05:57.265 19:07:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 61014 /var/tmp/spdk.sock 00:05:57.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:57.265 19:07:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 61014 ']' 00:05:57.265 19:07:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:57.265 19:07:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:57.265 19:07:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:57.265 19:07:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:57.265 19:07:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:57.265 19:07:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:57.265 [2024-12-16 19:07:41.352465] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:05:57.265 [2024-12-16 19:07:41.352556] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61014 ] 00:05:57.265 [2024-12-16 19:07:41.501057] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.265 [2024-12-16 19:07:41.583045] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.199 19:07:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:58.199 19:07:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:58.199 19:07:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:58.199 19:07:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=61024 00:05:58.199 19:07:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 61024 /var/tmp/spdk2.sock 00:05:58.199 19:07:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:58.199 19:07:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 61024 /var/tmp/spdk2.sock 00:05:58.199 19:07:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:58.199 19:07:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:58.199 19:07:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:58.199 19:07:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:58.199 19:07:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 61024 /var/tmp/spdk2.sock 00:05:58.199 19:07:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 61024 ']' 00:05:58.199 19:07:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:58.199 19:07:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:58.199 19:07:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:58.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:58.199 19:07:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:58.199 19:07:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:58.199 [2024-12-16 19:07:42.270100] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:05:58.199 [2024-12-16 19:07:42.270787] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61024 ] 00:05:58.199 [2024-12-16 19:07:42.437751] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 61014 has claimed it. 00:05:58.199 [2024-12-16 19:07:42.437798] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:58.780 ERROR: process (pid: 61024) is no longer running 00:05:58.780 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (61024) - No such process 00:05:58.780 19:07:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:58.780 19:07:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:58.780 19:07:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:58.780 19:07:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:58.780 19:07:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:58.780 19:07:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:58.780 19:07:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 61014 00:05:58.780 19:07:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 61014 00:05:58.780 19:07:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:58.780 19:07:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 61014 00:05:58.780 19:07:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 61014 ']' 00:05:58.780 19:07:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 61014 00:05:58.780 19:07:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:58.780 19:07:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:58.780 19:07:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61014 00:05:58.780 killing process with pid 61014 00:05:58.780 19:07:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:58.780 19:07:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:58.780 19:07:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61014' 00:05:58.780 19:07:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 61014 00:05:58.780 19:07:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 61014 00:06:00.167 ************************************ 00:06:00.167 END TEST locking_app_on_locked_coremask 00:06:00.167 ************************************ 00:06:00.167 00:06:00.167 real 0m2.976s 00:06:00.167 user 0m3.218s 00:06:00.167 sys 0m0.507s 00:06:00.167 19:07:44 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:00.167 19:07:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:00.167 19:07:44 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:00.167 19:07:44 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:00.167 19:07:44 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:00.167 19:07:44 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:00.167 ************************************ 00:06:00.167 START TEST locking_overlapped_coremask 00:06:00.167 ************************************ 00:06:00.167 19:07:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:06:00.167 19:07:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=61082 00:06:00.167 19:07:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 61082 /var/tmp/spdk.sock 00:06:00.167 19:07:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 61082 ']' 00:06:00.167 19:07:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:00.167 19:07:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:00.167 19:07:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:00.167 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:00.167 19:07:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:00.167 19:07:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:00.167 19:07:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:00.167 [2024-12-16 19:07:44.421959] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:06:00.168 [2024-12-16 19:07:44.422133] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61082 ] 00:06:00.426 [2024-12-16 19:07:44.594519] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:00.426 [2024-12-16 19:07:44.680002] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:00.426 [2024-12-16 19:07:44.680558] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.426 [2024-12-16 19:07:44.680585] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:00.992 19:07:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:00.992 19:07:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:00.992 19:07:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=61095 00:06:00.992 19:07:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 61095 /var/tmp/spdk2.sock 00:06:00.992 19:07:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:00.992 19:07:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 61095 /var/tmp/spdk2.sock 00:06:00.992 19:07:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:00.992 19:07:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:00.992 19:07:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:00.992 19:07:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:00.992 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:00.992 19:07:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:00.992 19:07:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 61095 /var/tmp/spdk2.sock 00:06:00.992 19:07:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 61095 ']' 00:06:00.992 19:07:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:00.992 19:07:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:00.992 19:07:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:00.992 19:07:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:00.992 19:07:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:01.251 [2024-12-16 19:07:45.347504] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:06:01.251 [2024-12-16 19:07:45.347612] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61095 ] 00:06:01.251 [2024-12-16 19:07:45.521836] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 61082 has claimed it. 00:06:01.251 [2024-12-16 19:07:45.525199] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:01.816 ERROR: process (pid: 61095) is no longer running 00:06:01.816 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (61095) - No such process 00:06:01.816 19:07:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:01.816 19:07:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:01.816 19:07:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:01.816 19:07:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:01.816 19:07:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:01.816 19:07:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:01.816 19:07:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:01.816 19:07:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:01.816 19:07:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:01.816 19:07:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:01.816 19:07:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 61082 00:06:01.816 19:07:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 61082 ']' 00:06:01.816 19:07:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 61082 00:06:01.816 19:07:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:06:01.816 19:07:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:01.816 19:07:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61082 00:06:01.816 killing process with pid 61082 00:06:01.816 19:07:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:01.816 19:07:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:01.816 19:07:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61082' 00:06:01.816 19:07:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 61082 00:06:01.816 19:07:46 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 61082 00:06:03.190 00:06:03.190 real 0m2.884s 00:06:03.190 user 0m7.814s 00:06:03.190 sys 0m0.433s 00:06:03.190 ************************************ 00:06:03.190 END TEST locking_overlapped_coremask 00:06:03.190 ************************************ 00:06:03.190 19:07:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:03.191 19:07:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:03.191 19:07:47 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:03.191 19:07:47 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:03.191 19:07:47 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:03.191 19:07:47 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:03.191 ************************************ 00:06:03.191 START TEST locking_overlapped_coremask_via_rpc 00:06:03.191 ************************************ 00:06:03.191 19:07:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:06:03.191 19:07:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=61148 00:06:03.191 19:07:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 61148 /var/tmp/spdk.sock 00:06:03.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:03.191 19:07:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 61148 ']' 00:06:03.191 19:07:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:03.191 19:07:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:03.191 19:07:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:03.191 19:07:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:03.191 19:07:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:03.191 19:07:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:03.191 [2024-12-16 19:07:47.305258] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:06:03.191 [2024-12-16 19:07:47.305349] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61148 ] 00:06:03.191 [2024-12-16 19:07:47.459413] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:03.191 [2024-12-16 19:07:47.459448] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:03.449 [2024-12-16 19:07:47.543394] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:03.449 [2024-12-16 19:07:47.543780] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.449 [2024-12-16 19:07:47.543796] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:04.016 19:07:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:04.016 19:07:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:04.016 19:07:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=61166 00:06:04.016 19:07:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 61166 /var/tmp/spdk2.sock 00:06:04.016 19:07:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:04.016 19:07:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 61166 ']' 00:06:04.016 19:07:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:04.016 19:07:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:04.016 19:07:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:04.016 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:04.016 19:07:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:04.016 19:07:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:04.016 [2024-12-16 19:07:48.224206] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:06:04.016 [2024-12-16 19:07:48.224800] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61166 ] 00:06:04.274 [2024-12-16 19:07:48.389141] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
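Both targets are deliberately started with --disable-cpumask-locks so that the overlapping masks (0x7 for pid 61148, 0x1c for pid 61166) can boot side by side; the test then turns locking back on over JSON-RPC, which is where the conflict is meant to surface. Roughly, using the sockets from the trace:

    # Re-enable core locking per target; the second call should fail because
    # bit 2 (core 2) is set in both 0x7 and 0x1c.
    scripts/rpc.py framework_enable_cpumask_locks                         # first target, /var/tmp/spdk.sock
    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks # expected to fail with -32603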
00:06:04.274 [2024-12-16 19:07:48.389198] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:04.275 [2024-12-16 19:07:48.597220] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:06:04.275 [2024-12-16 19:07:48.600409] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:04.275 [2024-12-16 19:07:48.600437] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:06:05.651 19:07:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:05.651 19:07:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:05.651 19:07:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:05.651 19:07:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:05.651 19:07:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:05.651 19:07:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:05.651 19:07:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:05.651 19:07:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:05.651 19:07:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:05.651 19:07:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:05.651 19:07:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:05.651 19:07:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:05.651 19:07:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:05.651 19:07:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:05.651 19:07:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:05.651 19:07:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:05.651 [2024-12-16 19:07:49.659319] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 61148 has claimed it. 00:06:05.651 request: 00:06:05.651 { 00:06:05.651 "method": "framework_enable_cpumask_locks", 00:06:05.651 "req_id": 1 00:06:05.651 } 00:06:05.651 Got JSON-RPC error response 00:06:05.651 response: 00:06:05.651 { 00:06:05.651 "code": -32603, 00:06:05.651 "message": "Failed to claim CPU core: 2" 00:06:05.651 } 00:06:05.651 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
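The -32603 response above is deterministic rather than a race: 0x7 selects cores 0-2 and 0x1c selects cores 2-4, so core 2 is claimed by both targets and the second framework_enable_cpumask_locks call has to fail. The overlap can be read straight off the masks:

    # 0x7 = 0b00111 (cores 0,1,2); 0x1c = 0b11100 (cores 2,3,4)
    printf 'overlap mask: 0x%x\n' $((0x7 & 0x1c))   # prints 0x4, i.e. bit 2 -> core 2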
00:06:05.651 19:07:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:05.651 19:07:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:05.651 19:07:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:05.651 19:07:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:05.651 19:07:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:05.651 19:07:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 61148 /var/tmp/spdk.sock 00:06:05.651 19:07:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 61148 ']' 00:06:05.651 19:07:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:05.651 19:07:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:05.651 19:07:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:05.651 19:07:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:05.651 19:07:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:05.651 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:05.651 19:07:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:05.651 19:07:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:05.651 19:07:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 61166 /var/tmp/spdk2.sock 00:06:05.651 19:07:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 61166 ']' 00:06:05.651 19:07:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:05.651 19:07:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:05.651 19:07:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:06:05.651 19:07:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:05.651 19:07:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:05.909 ************************************ 00:06:05.909 END TEST locking_overlapped_coremask_via_rpc 00:06:05.909 ************************************ 00:06:05.909 19:07:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:05.909 19:07:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:05.909 19:07:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:05.909 19:07:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:05.909 19:07:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:05.909 19:07:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:05.909 00:06:05.909 real 0m2.848s 00:06:05.909 user 0m1.065s 00:06:05.909 sys 0m0.136s 00:06:05.909 19:07:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:05.909 19:07:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:05.909 19:07:50 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:05.909 19:07:50 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 61148 ]] 00:06:05.909 19:07:50 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 61148 00:06:05.909 19:07:50 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 61148 ']' 00:06:05.909 19:07:50 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 61148 00:06:05.909 19:07:50 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:05.909 19:07:50 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:05.909 19:07:50 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61148 00:06:05.909 killing process with pid 61148 00:06:05.909 19:07:50 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:05.909 19:07:50 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:05.909 19:07:50 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61148' 00:06:05.909 19:07:50 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 61148 00:06:05.909 19:07:50 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 61148 00:06:07.338 19:07:51 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 61166 ]] 00:06:07.338 19:07:51 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 61166 00:06:07.338 19:07:51 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 61166 ']' 00:06:07.338 19:07:51 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 61166 00:06:07.338 19:07:51 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:07.338 19:07:51 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:07.338 
19:07:51 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61166 00:06:07.338 killing process with pid 61166 00:06:07.338 19:07:51 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:07.338 19:07:51 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:07.338 19:07:51 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61166' 00:06:07.338 19:07:51 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 61166 00:06:07.338 19:07:51 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 61166 00:06:08.307 19:07:52 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:08.307 Process with pid 61148 is not found 00:06:08.308 19:07:52 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:08.308 19:07:52 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 61148 ]] 00:06:08.308 19:07:52 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 61148 00:06:08.308 19:07:52 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 61148 ']' 00:06:08.308 19:07:52 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 61148 00:06:08.308 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (61148) - No such process 00:06:08.308 19:07:52 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 61148 is not found' 00:06:08.308 19:07:52 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 61166 ]] 00:06:08.308 Process with pid 61166 is not found 00:06:08.308 19:07:52 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 61166 00:06:08.308 19:07:52 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 61166 ']' 00:06:08.308 19:07:52 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 61166 00:06:08.308 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (61166) - No such process 00:06:08.308 19:07:52 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 61166 is not found' 00:06:08.308 19:07:52 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:08.308 00:06:08.308 real 0m28.703s 00:06:08.308 user 0m49.629s 00:06:08.308 sys 0m4.370s 00:06:08.308 19:07:52 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:08.308 19:07:52 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:08.308 ************************************ 00:06:08.308 END TEST cpu_locks 00:06:08.308 ************************************ 00:06:08.308 ************************************ 00:06:08.308 END TEST event 00:06:08.308 ************************************ 00:06:08.308 00:06:08.308 real 0m53.662s 00:06:08.308 user 1m39.455s 00:06:08.308 sys 0m7.291s 00:06:08.308 19:07:52 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:08.308 19:07:52 event -- common/autotest_common.sh@10 -- # set +x 00:06:08.569 19:07:52 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:08.569 19:07:52 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:08.569 19:07:52 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:08.569 19:07:52 -- common/autotest_common.sh@10 -- # set +x 00:06:08.569 ************************************ 00:06:08.569 START TEST thread 00:06:08.569 ************************************ 00:06:08.569 19:07:52 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:08.569 * Looking for test storage... 
00:06:08.569 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:08.569 19:07:52 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:08.569 19:07:52 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:06:08.569 19:07:52 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:08.569 19:07:52 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:08.569 19:07:52 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:08.569 19:07:52 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:08.569 19:07:52 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:08.569 19:07:52 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:08.569 19:07:52 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:08.569 19:07:52 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:08.569 19:07:52 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:08.569 19:07:52 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:08.569 19:07:52 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:08.569 19:07:52 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:08.569 19:07:52 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:08.569 19:07:52 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:08.569 19:07:52 thread -- scripts/common.sh@345 -- # : 1 00:06:08.569 19:07:52 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:08.569 19:07:52 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:08.569 19:07:52 thread -- scripts/common.sh@365 -- # decimal 1 00:06:08.569 19:07:52 thread -- scripts/common.sh@353 -- # local d=1 00:06:08.569 19:07:52 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:08.569 19:07:52 thread -- scripts/common.sh@355 -- # echo 1 00:06:08.569 19:07:52 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:08.569 19:07:52 thread -- scripts/common.sh@366 -- # decimal 2 00:06:08.569 19:07:52 thread -- scripts/common.sh@353 -- # local d=2 00:06:08.569 19:07:52 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:08.569 19:07:52 thread -- scripts/common.sh@355 -- # echo 2 00:06:08.569 19:07:52 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:08.569 19:07:52 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:08.569 19:07:52 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:08.569 19:07:52 thread -- scripts/common.sh@368 -- # return 0 00:06:08.569 19:07:52 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:08.569 19:07:52 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:08.569 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.569 --rc genhtml_branch_coverage=1 00:06:08.569 --rc genhtml_function_coverage=1 00:06:08.569 --rc genhtml_legend=1 00:06:08.569 --rc geninfo_all_blocks=1 00:06:08.569 --rc geninfo_unexecuted_blocks=1 00:06:08.569 00:06:08.569 ' 00:06:08.569 19:07:52 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:08.569 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.569 --rc genhtml_branch_coverage=1 00:06:08.569 --rc genhtml_function_coverage=1 00:06:08.569 --rc genhtml_legend=1 00:06:08.569 --rc geninfo_all_blocks=1 00:06:08.569 --rc geninfo_unexecuted_blocks=1 00:06:08.569 00:06:08.569 ' 00:06:08.569 19:07:52 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:08.569 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:06:08.569 --rc genhtml_branch_coverage=1 00:06:08.569 --rc genhtml_function_coverage=1 00:06:08.569 --rc genhtml_legend=1 00:06:08.569 --rc geninfo_all_blocks=1 00:06:08.569 --rc geninfo_unexecuted_blocks=1 00:06:08.569 00:06:08.569 ' 00:06:08.570 19:07:52 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:08.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.570 --rc genhtml_branch_coverage=1 00:06:08.570 --rc genhtml_function_coverage=1 00:06:08.570 --rc genhtml_legend=1 00:06:08.570 --rc geninfo_all_blocks=1 00:06:08.570 --rc geninfo_unexecuted_blocks=1 00:06:08.570 00:06:08.570 ' 00:06:08.570 19:07:52 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:08.570 19:07:52 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:08.570 19:07:52 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:08.570 19:07:52 thread -- common/autotest_common.sh@10 -- # set +x 00:06:08.570 ************************************ 00:06:08.570 START TEST thread_poller_perf 00:06:08.570 ************************************ 00:06:08.570 19:07:52 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:08.570 [2024-12-16 19:07:52.863829] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:06:08.570 [2024-12-16 19:07:52.864068] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61325 ] 00:06:08.831 [2024-12-16 19:07:53.025557] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.831 [2024-12-16 19:07:53.126093] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.831 Running 1000 pollers for 1 seconds with 1 microseconds period. 
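The cmp_versions trace just above is how the harness picks its lcov option set: both version strings are split on '.', '-' and ':' and compared field by field as decimals, so lt 1.15 2 succeeds (1 < 2 in the first field) and the lcov-1.x LCOV_OPTS block gets exported. A condensed sketch of the traced comparison, simplified to the '<' case only:

    lt() {                      # succeeds when version $1 sorts before version $2
        local IFS=.-:
        local -a ver1=($1) ver2=($2)
        local v
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1                # equal versions are not "less than"
    }
    lt 1.15 2 && echo "lcov older than 2.x: use the 1.x option set"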
00:06:10.216 [2024-12-16T19:07:54.570Z] ====================================== 00:06:10.216 [2024-12-16T19:07:54.570Z] busy:2610321184 (cyc) 00:06:10.216 [2024-12-16T19:07:54.570Z] total_run_count: 297000 00:06:10.216 [2024-12-16T19:07:54.570Z] tsc_hz: 2600000000 (cyc) 00:06:10.216 [2024-12-16T19:07:54.570Z] ====================================== 00:06:10.216 [2024-12-16T19:07:54.570Z] poller_cost: 8788 (cyc), 3380 (nsec) 00:06:10.216 ************************************ 00:06:10.216 END TEST thread_poller_perf 00:06:10.216 ************************************ 00:06:10.216 00:06:10.216 real 0m1.450s 00:06:10.217 user 0m1.275s 00:06:10.217 sys 0m0.068s 00:06:10.217 19:07:54 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:10.217 19:07:54 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:10.217 19:07:54 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:10.217 19:07:54 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:10.217 19:07:54 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:10.217 19:07:54 thread -- common/autotest_common.sh@10 -- # set +x 00:06:10.217 ************************************ 00:06:10.217 START TEST thread_poller_perf 00:06:10.217 ************************************ 00:06:10.217 19:07:54 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:10.217 [2024-12-16 19:07:54.352359] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:06:10.217 [2024-12-16 19:07:54.352610] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61357 ] 00:06:10.217 [2024-12-16 19:07:54.524278] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.477 [2024-12-16 19:07:54.622224] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.477 Running 1000 pollers for 1 seconds with 0 microseconds period. 
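poller_cost in the summary above is just busy cycles divided by the number of poller executions, converted to wall time with the advertised TSC frequency; the reported figures check out, and the 0-microsecond run that follows obeys the same formula (2603283558 / 3642000 = 714 cyc, and 714 / 2.6 = 274 ns):

    # poller_cost = busy / total_run_count; nsec = cycles / (tsc_hz / 1e9)
    echo $(( 2610321184 / 297000 ))              # 8788 cycles per poll
    echo $(( 8788 * 1000000000 / 2600000000 ))   # 3380 ns at 2.6 GHz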
00:06:11.418 [2024-12-16T19:07:55.772Z] ====================================== 00:06:11.418 [2024-12-16T19:07:55.772Z] busy:2603283558 (cyc) 00:06:11.418 [2024-12-16T19:07:55.772Z] total_run_count: 3642000 00:06:11.418 [2024-12-16T19:07:55.772Z] tsc_hz: 2600000000 (cyc) 00:06:11.418 [2024-12-16T19:07:55.772Z] ====================================== 00:06:11.418 [2024-12-16T19:07:55.772Z] poller_cost: 714 (cyc), 274 (nsec) 00:06:11.678 ************************************ 00:06:11.678 END TEST thread_poller_perf 00:06:11.678 ************************************ 00:06:11.678 00:06:11.678 real 0m1.452s 00:06:11.678 user 0m1.279s 00:06:11.678 sys 0m0.065s 00:06:11.678 19:07:55 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:11.678 19:07:55 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:11.678 19:07:55 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:11.678 ************************************ 00:06:11.678 END TEST thread 00:06:11.678 ************************************ 00:06:11.678 00:06:11.678 real 0m3.117s 00:06:11.678 user 0m2.653s 00:06:11.678 sys 0m0.248s 00:06:11.678 19:07:55 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:11.678 19:07:55 thread -- common/autotest_common.sh@10 -- # set +x 00:06:11.678 19:07:55 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:11.678 19:07:55 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:11.678 19:07:55 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:11.678 19:07:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:11.678 19:07:55 -- common/autotest_common.sh@10 -- # set +x 00:06:11.678 ************************************ 00:06:11.678 START TEST app_cmdline 00:06:11.678 ************************************ 00:06:11.678 19:07:55 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:11.678 * Looking for test storage... 
00:06:11.678 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:11.678 19:07:55 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:11.678 19:07:55 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:06:11.678 19:07:55 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:11.678 19:07:55 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:11.678 19:07:55 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:11.678 19:07:55 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:11.678 19:07:55 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:11.678 19:07:55 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:11.678 19:07:55 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:11.678 19:07:55 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:11.678 19:07:55 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:11.678 19:07:55 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:11.678 19:07:55 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:11.678 19:07:55 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:11.678 19:07:55 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:11.678 19:07:55 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:11.678 19:07:55 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:11.678 19:07:55 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:11.678 19:07:55 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:11.678 19:07:55 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:11.678 19:07:55 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:11.678 19:07:55 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:11.678 19:07:55 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:11.678 19:07:55 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:11.678 19:07:55 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:11.678 19:07:55 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:11.678 19:07:55 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:11.678 19:07:55 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:11.678 19:07:55 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:11.678 19:07:55 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:11.678 19:07:55 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:11.678 19:07:55 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:11.678 19:07:55 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:11.678 19:07:55 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:11.678 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.678 --rc genhtml_branch_coverage=1 00:06:11.678 --rc genhtml_function_coverage=1 00:06:11.678 --rc genhtml_legend=1 00:06:11.678 --rc geninfo_all_blocks=1 00:06:11.678 --rc geninfo_unexecuted_blocks=1 00:06:11.678 00:06:11.678 ' 00:06:11.678 19:07:55 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:11.678 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.678 --rc genhtml_branch_coverage=1 00:06:11.678 --rc genhtml_function_coverage=1 00:06:11.678 --rc genhtml_legend=1 00:06:11.678 --rc geninfo_all_blocks=1 00:06:11.678 --rc geninfo_unexecuted_blocks=1 00:06:11.678 
00:06:11.678 ' 00:06:11.678 19:07:55 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:11.678 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.678 --rc genhtml_branch_coverage=1 00:06:11.678 --rc genhtml_function_coverage=1 00:06:11.678 --rc genhtml_legend=1 00:06:11.678 --rc geninfo_all_blocks=1 00:06:11.678 --rc geninfo_unexecuted_blocks=1 00:06:11.678 00:06:11.678 ' 00:06:11.678 19:07:55 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:11.678 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.678 --rc genhtml_branch_coverage=1 00:06:11.678 --rc genhtml_function_coverage=1 00:06:11.678 --rc genhtml_legend=1 00:06:11.678 --rc geninfo_all_blocks=1 00:06:11.678 --rc geninfo_unexecuted_blocks=1 00:06:11.678 00:06:11.678 ' 00:06:11.678 19:07:55 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:11.678 19:07:55 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=61446 00:06:11.678 19:07:55 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:11.678 19:07:55 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 61446 00:06:11.678 19:07:55 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 61446 ']' 00:06:11.679 19:07:55 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:11.679 19:07:55 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:11.679 19:07:55 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:11.679 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:11.679 19:07:55 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:11.679 19:07:55 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:11.939 [2024-12-16 19:07:56.044286] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
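This spdk_tgt instance is started with --rpcs-allowed spdk_get_version,rpc_get_methods, shrinking the RPC surface to exactly those two methods; the trace below verifies the allowlist and then expects JSON-RPC -32601 (Method not found) for anything outside it. The same probe by hand, assuming the default /var/tmp/spdk.sock socket:

    scripts/rpc.py spdk_get_version          # allowed: returns the version object
    scripts/rpc.py rpc_get_methods           # allowed: lists the two permitted methods
    scripts/rpc.py env_dpdk_get_mem_stats    # not allowlisted: fails with -32601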
00:06:11.939 [2024-12-16 19:07:56.044747] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61446 ]
00:06:11.939 [2024-12-16 19:07:56.198023] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:12.200 [2024-12-16 19:07:56.298124] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:06:12.772 19:07:56 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:12.772 19:07:56 app_cmdline -- common/autotest_common.sh@868 -- # return 0
00:06:12.772 19:07:56 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version
00:06:12.772 {
00:06:12.772 "version": "SPDK v25.01-pre git sha1 e01cb43b8",
00:06:12.772 "fields": {
00:06:12.772 "major": 25,
00:06:12.772 "minor": 1,
00:06:12.772 "patch": 0,
00:06:12.772 "suffix": "-pre",
00:06:12.772 "commit": "e01cb43b8"
00:06:12.772 }
00:06:12.772 }
00:06:12.772 19:07:57 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=()
00:06:12.772 19:07:57 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods")
00:06:12.772 19:07:57 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version")
00:06:12.772 19:07:57 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort))
00:06:12.772 19:07:57 app_cmdline -- app/cmdline.sh@26 -- # sort
00:06:12.772 19:07:57 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods
00:06:12.772 19:07:57 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]'
00:06:12.772 19:07:57 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:12.772 19:07:57 app_cmdline -- common/autotest_common.sh@10 -- # set +x
00:06:12.772 19:07:57 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:12.772 19:07:57 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 ))
00:06:12.772 19:07:57 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]]
00:06:12.772 19:07:57 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:06:12.772 19:07:57 app_cmdline -- common/autotest_common.sh@652 -- # local es=0
00:06:12.772 19:07:57 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:06:12.772 19:07:57 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:06:13.033 19:07:57 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:06:13.033 19:07:57 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:06:13.033 19:07:57 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:06:13.033 19:07:57 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:06:13.033 19:07:57 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:06:13.033 19:07:57 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:06:13.033 19:07:57 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]]
00:06:13.033 19:07:57 app_cmdline -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:06:13.033 request:
00:06:13.033 {
00:06:13.033 "method": "env_dpdk_get_mem_stats",
00:06:13.033 "req_id": 1
00:06:13.033 }
00:06:13.033 Got JSON-RPC error response
00:06:13.033 response:
00:06:13.033 {
00:06:13.033 "code": -32601,
00:06:13.033 "message": "Method not found"
00:06:13.033 }
00:06:13.033 19:07:57 app_cmdline -- common/autotest_common.sh@655 -- # es=1
00:06:13.033 19:07:57 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:06:13.033 19:07:57 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:06:13.033 19:07:57 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:06:13.033 19:07:57 app_cmdline -- app/cmdline.sh@1 -- # killprocess 61446
00:06:13.033 19:07:57 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 61446 ']'
00:06:13.033 19:07:57 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 61446
00:06:13.033 19:07:57 app_cmdline -- common/autotest_common.sh@959 -- # uname
00:06:13.033 19:07:57 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:13.033 19:07:57 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61446
00:06:13.033 killing process with pid 61446
00:06:13.033 19:07:57 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:13.033 19:07:57 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:13.033 19:07:57 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61446'
00:06:13.033 19:07:57 app_cmdline -- common/autotest_common.sh@973 -- # kill 61446
00:06:13.033 19:07:57 app_cmdline -- common/autotest_common.sh@978 -- # wait 61446
00:06:14.418 ************************************
00:06:14.418 END TEST app_cmdline
00:06:14.418 ************************************
00:06:14.418
00:06:14.418 real 0m2.845s
00:06:14.418 user 0m3.156s
00:06:14.418 sys 0m0.384s
00:06:14.418 19:07:58 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:14.418 19:07:58 app_cmdline -- common/autotest_common.sh@10 -- # set +x
00:06:14.418 19:07:58 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh
00:06:14.418 19:07:58 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:14.418 19:07:58 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:14.418 19:07:58 -- common/autotest_common.sh@10 -- # set +x
00:06:14.418 ************************************
00:06:14.418 START TEST version
00:06:14.418 ************************************
00:06:14.418 19:07:58 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh
00:06:14.680 * Looking for test storage...
00:06:14.680 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app
00:06:14.680 19:07:58 version -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:06:14.680 19:07:58 version -- common/autotest_common.sh@1711 -- # lcov --version
00:06:14.680 19:07:58 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:06:14.680 19:07:58 version -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:06:14.680 19:07:58 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:06:14.680 19:07:58 version -- scripts/common.sh@333 -- # local ver1 ver1_l
00:06:14.680 19:07:58 version -- scripts/common.sh@334 -- # local ver2 ver2_l
00:06:14.680 19:07:58 version -- scripts/common.sh@336 -- # IFS=.-:
00:06:14.680 19:07:58 version -- scripts/common.sh@336 -- # read -ra ver1
00:06:14.680 19:07:58 version -- scripts/common.sh@337 -- # IFS=.-:
00:06:14.680 19:07:58 version -- scripts/common.sh@337 -- # read -ra ver2
00:06:14.680 19:07:58 version -- scripts/common.sh@338 -- # local 'op=<'
00:06:14.680 19:07:58 version -- scripts/common.sh@340 -- # ver1_l=2
00:06:14.680 19:07:58 version -- scripts/common.sh@341 -- # ver2_l=1
00:06:14.680 19:07:58 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:06:14.680 19:07:58 version -- scripts/common.sh@344 -- # case "$op" in
00:06:14.680 19:07:58 version -- scripts/common.sh@345 -- # : 1
00:06:14.680 19:07:58 version -- scripts/common.sh@364 -- # (( v = 0 ))
00:06:14.680 19:07:58 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:06:14.680 19:07:58 version -- scripts/common.sh@365 -- # decimal 1
00:06:14.680 19:07:58 version -- scripts/common.sh@353 -- # local d=1
00:06:14.680 19:07:58 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:14.680 19:07:58 version -- scripts/common.sh@355 -- # echo 1
00:06:14.680 19:07:58 version -- scripts/common.sh@365 -- # ver1[v]=1
00:06:14.680 19:07:58 version -- scripts/common.sh@366 -- # decimal 2
00:06:14.680 19:07:58 version -- scripts/common.sh@353 -- # local d=2
00:06:14.680 19:07:58 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:14.680 19:07:58 version -- scripts/common.sh@355 -- # echo 2
00:06:14.680 19:07:58 version -- scripts/common.sh@366 -- # ver2[v]=2
00:06:14.680 19:07:58 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:06:14.680 19:07:58 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:06:14.680 19:07:58 version -- scripts/common.sh@368 -- # return 0
00:06:14.680 19:07:58 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:06:14.680 19:07:58 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:06:14.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:14.680 --rc genhtml_branch_coverage=1
00:06:14.680 --rc genhtml_function_coverage=1
00:06:14.680 --rc genhtml_legend=1
00:06:14.680 --rc geninfo_all_blocks=1
00:06:14.680 --rc geninfo_unexecuted_blocks=1
00:06:14.680
00:06:14.680 '
00:06:14.680 19:07:58 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:06:14.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:14.680 --rc genhtml_branch_coverage=1
00:06:14.680 --rc genhtml_function_coverage=1
00:06:14.680 --rc genhtml_legend=1
00:06:14.680 --rc geninfo_all_blocks=1
00:06:14.680 --rc geninfo_unexecuted_blocks=1
00:06:14.680
00:06:14.680 '
00:06:14.680 19:07:58 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:06:14.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:14.680 --rc genhtml_branch_coverage=1
00:06:14.680 --rc genhtml_function_coverage=1
00:06:14.680 --rc genhtml_legend=1
00:06:14.680 --rc geninfo_all_blocks=1
00:06:14.680 --rc geninfo_unexecuted_blocks=1
00:06:14.680
00:06:14.680 '
00:06:14.680 19:07:58 version -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:06:14.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:14.680 --rc genhtml_branch_coverage=1
00:06:14.680 --rc genhtml_function_coverage=1
00:06:14.680 --rc genhtml_legend=1
00:06:14.680 --rc geninfo_all_blocks=1
00:06:14.680 --rc geninfo_unexecuted_blocks=1
00:06:14.680
00:06:14.680 '
00:06:14.680 19:07:58 version -- app/version.sh@17 -- # get_header_version major
00:06:14.680 19:07:58 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h
00:06:14.680 19:07:58 version -- app/version.sh@14 -- # tr -d '"'
00:06:14.680 19:07:58 version -- app/version.sh@14 -- # cut -f2
00:06:14.680 19:07:58 version -- app/version.sh@17 -- # major=25
00:06:14.680 19:07:58 version -- app/version.sh@18 -- # get_header_version minor
00:06:14.680 19:07:58 version -- app/version.sh@14 -- # cut -f2
00:06:14.680 19:07:58 version -- app/version.sh@14 -- # tr -d '"'
00:06:14.680 19:07:58 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h
00:06:14.680 19:07:58 version -- app/version.sh@18 -- # minor=1
00:06:14.680 19:07:58 version -- app/version.sh@19 -- # get_header_version patch
00:06:14.680 19:07:58 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h
00:06:14.680 19:07:58 version -- app/version.sh@14 -- # cut -f2
00:06:14.680 19:07:58 version -- app/version.sh@14 -- # tr -d '"'
00:06:14.680 19:07:58 version -- app/version.sh@19 -- # patch=0
00:06:14.680 19:07:58 version -- app/version.sh@20 -- # get_header_version suffix
00:06:14.680 19:07:58 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h
00:06:14.680 19:07:58 version -- app/version.sh@14 -- # cut -f2
00:06:14.680 19:07:58 version -- app/version.sh@14 -- # tr -d '"'
00:06:14.680 19:07:58 version -- app/version.sh@20 -- # suffix=-pre
00:06:14.680 19:07:58 version -- app/version.sh@22 -- # version=25.1
00:06:14.680 19:07:58 version -- app/version.sh@25 -- # (( patch != 0 ))
00:06:14.680 19:07:58 version -- app/version.sh@28 -- # version=25.1rc0
00:06:14.680 19:07:58 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python
00:06:14.680 19:07:58 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)'
00:06:14.680 19:07:58 version -- app/version.sh@30 -- # py_version=25.1rc0
00:06:14.680 19:07:58 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]]
00:06:14.680 ************************************
00:06:14.680 END TEST version
00:06:14.680 ************************************
00:06:14.680
00:06:14.681 real 0m0.180s
00:06:14.681 user 0m0.115s
00:06:14.681 sys 0m0.094s
00:06:14.681 19:07:58 version -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:14.681 19:07:58 version -- common/autotest_common.sh@10 -- # set +x
00:06:14.681 19:07:58 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']'
00:06:14.681 19:07:58 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]]
00:06:14.681 19:07:58 -- spdk/autotest.sh@194 -- # uname -s
00:06:14.681 19:07:58 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]]
00:06:14.681 19:07:58 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]]
00:06:14.681 19:07:58 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]]
00:06:14.681 19:07:58 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']'
00:06:14.681 19:07:58 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme
00:06:14.681 19:07:58 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:06:14.681 19:07:58 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:14.681 19:07:58 -- common/autotest_common.sh@10 -- # set +x
00:06:14.681 ************************************
00:06:14.681 START TEST blockdev_nvme
00:06:14.681 ************************************
00:06:14.681 19:07:58 blockdev_nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme
00:06:14.681 * Looking for test storage...
00:06:14.681 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev
00:06:14.681 19:07:59 blockdev_nvme -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:06:14.681 19:07:59 blockdev_nvme -- common/autotest_common.sh@1711 -- # lcov --version
00:06:14.681 19:07:59 blockdev_nvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:06:14.942 19:07:59 blockdev_nvme -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:06:14.942 19:07:59 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:06:14.942 19:07:59 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l
00:06:14.942 19:07:59 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l
00:06:14.942 19:07:59 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-:
00:06:14.942 19:07:59 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1
00:06:14.942 19:07:59 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-:
00:06:14.942 19:07:59 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2
00:06:14.942 19:07:59 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<'
00:06:14.942 19:07:59 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2
00:06:14.942 19:07:59 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1
00:06:14.942 19:07:59 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:06:14.942 19:07:59 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in
00:06:14.942 19:07:59 blockdev_nvme -- scripts/common.sh@345 -- # : 1
00:06:14.942 19:07:59 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 ))
00:06:14.942 19:07:59 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:06:14.942 19:07:59 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1
00:06:14.942 19:07:59 blockdev_nvme -- scripts/common.sh@353 -- # local d=1
00:06:14.942 19:07:59 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:14.942 19:07:59 blockdev_nvme -- scripts/common.sh@355 -- # echo 1
00:06:14.942 19:07:59 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1
00:06:14.942 19:07:59 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2
00:06:14.942 19:07:59 blockdev_nvme -- scripts/common.sh@353 -- # local d=2
00:06:14.942 19:07:59 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:14.942 19:07:59 blockdev_nvme -- scripts/common.sh@355 -- # echo 2
00:06:14.942 19:07:59 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2
00:06:14.942 19:07:59 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:06:14.942 19:07:59 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:06:14.942 19:07:59 blockdev_nvme -- scripts/common.sh@368 -- # return 0
00:06:14.942 19:07:59 blockdev_nvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:06:14.942 19:07:59 blockdev_nvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:06:14.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:14.942 --rc genhtml_branch_coverage=1
00:06:14.942 --rc genhtml_function_coverage=1
00:06:14.942 --rc genhtml_legend=1
00:06:14.942 --rc geninfo_all_blocks=1
00:06:14.942 --rc geninfo_unexecuted_blocks=1
00:06:14.942
00:06:14.942 '
00:06:14.942 19:07:59 blockdev_nvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:06:14.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:14.942 --rc genhtml_branch_coverage=1
00:06:14.942 --rc genhtml_function_coverage=1
00:06:14.942 --rc genhtml_legend=1
00:06:14.942 --rc geninfo_all_blocks=1
00:06:14.942 --rc geninfo_unexecuted_blocks=1
00:06:14.942
00:06:14.942 '
00:06:14.942 19:07:59 blockdev_nvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:06:14.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:14.942 --rc genhtml_branch_coverage=1
00:06:14.943 --rc genhtml_function_coverage=1
00:06:14.943 --rc genhtml_legend=1
00:06:14.943 --rc geninfo_all_blocks=1
00:06:14.943 --rc geninfo_unexecuted_blocks=1
00:06:14.943
00:06:14.943 '
00:06:14.943 19:07:59 blockdev_nvme -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:06:14.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:14.943 --rc genhtml_branch_coverage=1
00:06:14.943 --rc genhtml_function_coverage=1
00:06:14.943 --rc genhtml_legend=1
00:06:14.943 --rc geninfo_all_blocks=1
00:06:14.943 --rc geninfo_unexecuted_blocks=1
00:06:14.943
00:06:14.943 '
00:06:14.943 19:07:59 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh
00:06:14.943 19:07:59 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e
00:06:14.943 19:07:59 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd
00:06:14.943 19:07:59 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:06:14.943 19:07:59 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json
00:06:14.943 19:07:59 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json
00:06:14.943 19:07:59 blockdev_nvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30
00:06:14.943 19:07:59 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30
00:06:14.943 19:07:59 blockdev_nvme -- bdev/blockdev.sh@20 -- # :
00:06:14.943 19:07:59 blockdev_nvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0
00:06:14.943 19:07:59 blockdev_nvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1
00:06:14.943 19:07:59 blockdev_nvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5
00:06:14.943 19:07:59 blockdev_nvme -- bdev/blockdev.sh@711 -- # uname -s
00:06:14.943 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:14.943 19:07:59 blockdev_nvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']'
00:06:14.943 19:07:59 blockdev_nvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0
00:06:14.943 19:07:59 blockdev_nvme -- bdev/blockdev.sh@719 -- # test_type=nvme
00:06:14.943 19:07:59 blockdev_nvme -- bdev/blockdev.sh@720 -- # crypto_device=
00:06:14.943 19:07:59 blockdev_nvme -- bdev/blockdev.sh@721 -- # dek=
00:06:14.943 19:07:59 blockdev_nvme -- bdev/blockdev.sh@722 -- # env_ctx=
00:06:14.943 19:07:59 blockdev_nvme -- bdev/blockdev.sh@723 -- # wait_for_rpc=
00:06:14.943 19:07:59 blockdev_nvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']'
00:06:14.943 19:07:59 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == bdev ]]
00:06:14.943 19:07:59 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == crypto_* ]]
00:06:14.943 19:07:59 blockdev_nvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt
00:06:14.943 19:07:59 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=61618
00:06:14.943 19:07:59 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT
00:06:14.943 19:07:59 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 61618
00:06:14.943 19:07:59 blockdev_nvme -- common/autotest_common.sh@835 -- # '[' -z 61618 ']'
00:06:14.943 19:07:59 blockdev_nvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:14.943 19:07:59 blockdev_nvme -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:14.943 19:07:59 blockdev_nvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:14.943 19:07:59 blockdev_nvme -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:14.943 19:07:59 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:06:14.943 19:07:59 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' ''
00:06:14.943 [2024-12-16 19:07:59.213254] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization...
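start_spdk_tgt above brings up an unconfigured target; setup_nvme_conf then asks gen_nvme.sh for a bdev config covering every NVMe controller in the VM and loads it with a single load_subsystem_config call, shown next. The generated JSON is equivalent to attaching the four controllers one at a time, along these lines:

    # One bdev_nvme_attach_controller per PCIe device, matching the generated config.
    for i in 0 1 2 3; do
        scripts/rpc.py bdev_nvme_attach_controller -b "Nvme$i" -t PCIe -a "0000:00:1$i.0"
    done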
00:06:14.943 [2024-12-16 19:07:59.214123] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61618 ] 00:06:15.203 [2024-12-16 19:07:59.390192] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.203 [2024-12-16 19:07:59.507756] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.148 19:08:00 blockdev_nvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:16.148 19:08:00 blockdev_nvme -- common/autotest_common.sh@868 -- # return 0 00:06:16.148 19:08:00 blockdev_nvme -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:06:16.148 19:08:00 blockdev_nvme -- bdev/blockdev.sh@736 -- # setup_nvme_conf 00:06:16.148 19:08:00 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:06:16.148 19:08:00 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:06:16.148 19:08:00 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:16.148 19:08:00 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:06:16.148 19:08:00 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:16.148 19:08:00 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:16.410 19:08:00 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:16.410 19:08:00 blockdev_nvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:06:16.410 19:08:00 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:16.410 19:08:00 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:16.410 19:08:00 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:16.410 19:08:00 blockdev_nvme -- bdev/blockdev.sh@777 -- # cat 00:06:16.410 19:08:00 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:06:16.410 19:08:00 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:16.410 19:08:00 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:16.410 19:08:00 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:16.410 19:08:00 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:06:16.410 19:08:00 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:16.410 19:08:00 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:16.410 19:08:00 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:16.410 19:08:00 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:06:16.410 19:08:00 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:16.410 19:08:00 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:16.410 19:08:00 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:16.410 19:08:00 blockdev_nvme -- 
bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:06:16.410 19:08:00 blockdev_nvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:06:16.410 19:08:00 blockdev_nvme -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:06:16.410 19:08:00 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:16.410 19:08:00 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:16.410 19:08:00 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:16.410 19:08:00 blockdev_nvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:06:16.410 19:08:00 blockdev_nvme -- bdev/blockdev.sh@786 -- # jq -r .name 00:06:16.411 19:08:00 blockdev_nvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "0ab92d36-43be-4bfc-82d6-76b5ec805c8f"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "0ab92d36-43be-4bfc-82d6-76b5ec805c8f",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "7146e55d-3424-4a88-a6e0-04629215cd13"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "7146e55d-3424-4a88-a6e0-04629215cd13",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": 
"nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "7225e289-d5c3-43a5-955d-4db587daf3d5"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "7225e289-d5c3-43a5-955d-4db587daf3d5",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "eb95d9f9-a0c8-4c5b-9e4e-7a9b6a307766"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "eb95d9f9-a0c8-4c5b-9e4e-7a9b6a307766",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "798503a4-02d6-4dcb-ab69-da60771af854"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 
1048576,' ' "uuid": "798503a4-02d6-4dcb-ab69-da60771af854",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "9e576ecf-18b5-4c22-a215-f0b337544fab"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "9e576ecf-18b5-4c22-a215-f0b337544fab",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:06:16.411 19:08:00 blockdev_nvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:06:16.411 19:08:00 blockdev_nvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1 00:06:16.411 19:08:00 blockdev_nvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:06:16.411 19:08:00 blockdev_nvme -- bdev/blockdev.sh@791 -- # killprocess 61618 00:06:16.411 19:08:00 blockdev_nvme -- common/autotest_common.sh@954 -- # '[' -z 61618 ']' 00:06:16.411 19:08:00 blockdev_nvme -- common/autotest_common.sh@958 -- # kill -0 61618 00:06:16.411 19:08:00 blockdev_nvme -- common/autotest_common.sh@959 -- # uname 00:06:16.411 19:08:00 
blockdev_nvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:16.411 19:08:00 blockdev_nvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61618 00:06:16.411 killing process with pid 61618 00:06:16.411 19:08:00 blockdev_nvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:16.411 19:08:00 blockdev_nvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:16.411 19:08:00 blockdev_nvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61618' 00:06:16.411 19:08:00 blockdev_nvme -- common/autotest_common.sh@973 -- # kill 61618 00:06:16.411 19:08:00 blockdev_nvme -- common/autotest_common.sh@978 -- # wait 61618 00:06:18.325 19:08:02 blockdev_nvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:18.325 19:08:02 blockdev_nvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:18.325 19:08:02 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:06:18.325 19:08:02 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:18.325 19:08:02 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:18.325 ************************************ 00:06:18.325 START TEST bdev_hello_world 00:06:18.325 ************************************ 00:06:18.325 19:08:02 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:18.325 [2024-12-16 19:08:02.311068] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:06:18.325 [2024-12-16 19:08:02.311203] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61702 ] 00:06:18.325 [2024-12-16 19:08:02.470849] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.325 [2024-12-16 19:08:02.569358] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.898 [2024-12-16 19:08:03.115821] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:06:18.898 [2024-12-16 19:08:03.115871] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:06:18.898 [2024-12-16 19:08:03.115889] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:06:18.898 [2024-12-16 19:08:03.118339] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:06:18.898 [2024-12-16 19:08:03.118740] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:06:18.898 [2024-12-16 19:08:03.118761] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:06:18.898 [2024-12-16 19:08:03.119111] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
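The hello_world run above drives SPDK's hello_bdev example end to end against the first attached namespace: open the bdev, write a buffer, read it back, and confirm the "Hello World!" string round-trips. Stripped of the harness wrapping, the invocation is just the following (paths exactly as they appear in the log):

    # --json supplies the bdev_nvme_attach_controller config written out
    # earlier by gen_nvme.sh; -b names the bdev to exercise.
    /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -b Nvme0n1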
00:06:18.898 00:06:18.898 [2024-12-16 19:08:03.119241] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:06:19.837 00:06:19.837 real 0m1.600s 00:06:19.837 user 0m1.316s 00:06:19.837 sys 0m0.177s 00:06:19.837 19:08:03 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:19.837 ************************************ 00:06:19.837 END TEST bdev_hello_world 00:06:19.837 19:08:03 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:06:19.837 ************************************ 00:06:19.837 19:08:03 blockdev_nvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:06:19.837 19:08:03 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:19.838 19:08:03 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:19.838 19:08:03 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:19.838 ************************************ 00:06:19.838 START TEST bdev_bounds 00:06:19.838 ************************************ 00:06:19.838 19:08:03 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:06:19.838 19:08:03 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=61739 00:06:19.838 Process bdevio pid: 61739 00:06:19.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:19.838 19:08:03 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:06:19.838 19:08:03 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 61739' 00:06:19.838 19:08:03 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 61739 00:06:19.838 19:08:03 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 61739 ']' 00:06:19.838 19:08:03 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:19.838 19:08:03 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:19.838 19:08:03 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:19.838 19:08:03 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:19.838 19:08:03 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:19.838 19:08:03 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:19.838 [2024-12-16 19:08:03.952203] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
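bdev_bounds runs bdevio in wait mode, which is why another waitforlisten cycle appears here: the binary parks on the RPC socket until the companion script triggers the suites. A condensed sketch of the two-step launch (flags as traced above; -s 0 is the PRE_RESERVED_MEM value set earlier, and killprocess is the harness's cleanup helper):

    # Start bdevio waiting (-w) for an RPC trigger, then fire every suite.
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
    bdevio_pid=$!
    # ...waitforlisten "$bdevio_pid"...
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests
    killprocess "$bdevio_pid"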
00:06:19.838 [2024-12-16 19:08:03.952460] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61739 ] 00:06:19.838 [2024-12-16 19:08:04.104493] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:20.095 [2024-12-16 19:08:04.207009] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:20.095 [2024-12-16 19:08:04.207304] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:20.095 [2024-12-16 19:08:04.207344] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.661 19:08:04 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:20.662 19:08:04 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:06:20.662 19:08:04 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:06:20.662 I/O targets: 00:06:20.662 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:06:20.662 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:06:20.662 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:20.662 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:20.662 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:20.662 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:06:20.662 00:06:20.662 00:06:20.662 CUnit - A unit testing framework for C - Version 2.1-3 00:06:20.662 http://cunit.sourceforge.net/ 00:06:20.662 00:06:20.662 00:06:20.662 Suite: bdevio tests on: Nvme3n1 00:06:20.662 Test: blockdev write read block ...passed 00:06:20.662 Test: blockdev write zeroes read block ...passed 00:06:20.662 Test: blockdev write zeroes read no split ...passed 00:06:20.662 Test: blockdev write zeroes read split ...passed 00:06:20.662 Test: blockdev write zeroes read split partial ...passed 00:06:20.662 Test: blockdev reset ...[2024-12-16 19:08:04.924758] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:06:20.662 [2024-12-16 19:08:04.927776] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
00:06:20.662 passed 00:06:20.662 Test: blockdev write read 8 blocks ...passed 00:06:20.662 Test: blockdev write read size > 128k ...passed 00:06:20.662 Test: blockdev write read invalid size ...passed 00:06:20.662 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:20.662 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:20.662 Test: blockdev write read max offset ...passed 00:06:20.662 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:20.662 Test: blockdev writev readv 8 blocks ...passed 00:06:20.662 Test: blockdev writev readv 30 x 1block ...passed 00:06:20.662 Test: blockdev writev readv block ...passed 00:06:20.662 Test: blockdev writev readv size > 128k ...passed 00:06:20.662 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:20.662 Test: blockdev comparev and writev ...[2024-12-16 19:08:04.935138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b7e0a000 len:0x1000 00:06:20.662 [2024-12-16 19:08:04.935213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:20.662 passed 00:06:20.662 Test: blockdev nvme passthru rw ...passed 00:06:20.662 Test: blockdev nvme passthru vendor specific ...passed 00:06:20.662 Test: blockdev nvme admin passthru ...[2024-12-16 19:08:04.936060] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:20.662 [2024-12-16 19:08:04.936096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:20.662 passed 00:06:20.662 Test: blockdev copy ...passed 00:06:20.662 Suite: bdevio tests on: Nvme2n3 00:06:20.662 Test: blockdev write read block ...passed 00:06:20.662 Test: blockdev write zeroes read block ...passed 00:06:20.662 Test: blockdev write zeroes read no split ...passed 00:06:20.662 Test: blockdev write zeroes read split ...passed 00:06:20.662 Test: blockdev write zeroes read split partial ...passed 00:06:20.662 Test: blockdev reset ...[2024-12-16 19:08:04.993813] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:20.662 [2024-12-16 19:08:04.997284] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:06:20.662 passed 00:06:20.662 Test: blockdev write read 8 blocks ...passed 00:06:20.662 Test: blockdev write read size > 128k ...passed 00:06:20.662 Test: blockdev write read invalid size ...passed 00:06:20.662 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:20.662 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:20.662 Test: blockdev write read max offset ...passed 00:06:20.662 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:20.662 Test: blockdev writev readv 8 blocks ...passed 00:06:20.662 Test: blockdev writev readv 30 x 1block ...passed 00:06:20.662 Test: blockdev writev readv block ...passed 00:06:20.662 Test: blockdev writev readv size > 128k ...passed 00:06:20.662 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:20.662 Test: blockdev comparev and writev ...[2024-12-16 19:08:05.005148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x29a806000 len:0x1000 00:06:20.662 [2024-12-16 19:08:05.005359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:20.662 passed 00:06:20.662 Test: blockdev nvme passthru rw ...passed 00:06:20.662 Test: blockdev nvme passthru vendor specific ...[2024-12-16 19:08:05.006558] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:20.662 [2024-12-16 19:08:05.006685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:20.662 passed 00:06:20.662 Test: blockdev nvme admin passthru ...passed 00:06:20.920 Test: blockdev copy ...passed 00:06:20.920 Suite: bdevio tests on: Nvme2n2 00:06:20.920 Test: blockdev write read block ...passed 00:06:20.920 Test: blockdev write zeroes read block ...passed 00:06:20.920 Test: blockdev write zeroes read no split ...passed 00:06:20.920 Test: blockdev write zeroes read split ...passed 00:06:20.920 Test: blockdev write zeroes read split partial ...passed 00:06:20.920 Test: blockdev reset ...[2024-12-16 19:08:05.062517] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:20.920 [2024-12-16 19:08:05.065664] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:06:20.920 passed 00:06:20.920 Test: blockdev write read 8 blocks ...passed 00:06:20.920 Test: blockdev write read size > 128k ...passed 00:06:20.920 Test: blockdev write read invalid size ...passed 00:06:20.920 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:20.920 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:20.920 Test: blockdev write read max offset ...passed 00:06:20.920 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:20.920 Test: blockdev writev readv 8 blocks ...passed 00:06:20.920 Test: blockdev writev readv 30 x 1block ...passed 00:06:20.920 Test: blockdev writev readv block ...passed 00:06:20.920 Test: blockdev writev readv size > 128k ...passed 00:06:20.920 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:20.920 Test: blockdev comparev and writev ...[2024-12-16 19:08:05.074700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d463c000 len:0x1000 00:06:20.920 [2024-12-16 19:08:05.074829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:20.920 passed 00:06:20.920 Test: blockdev nvme passthru rw ...passed 00:06:20.920 Test: blockdev nvme passthru vendor specific ...[2024-12-16 19:08:05.075964] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:20.920 [2024-12-16 19:08:05.076333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:20.920 passed 00:06:20.920 Test: blockdev nvme admin passthru ...passed 00:06:20.920 Test: blockdev copy ...passed 00:06:20.920 Suite: bdevio tests on: Nvme2n1 00:06:20.920 Test: blockdev write read block ...passed 00:06:20.920 Test: blockdev write zeroes read block ...passed 00:06:20.920 Test: blockdev write zeroes read no split ...passed 00:06:20.920 Test: blockdev write zeroes read split ...passed 00:06:20.920 Test: blockdev write zeroes read split partial ...passed 00:06:20.920 Test: blockdev reset ...[2024-12-16 19:08:05.130970] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:20.920 [2024-12-16 19:08:05.134210] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:06:20.920 passed 00:06:20.920 Test: blockdev write read 8 blocks ...passed 00:06:20.920 Test: blockdev write read size > 128k ...passed 00:06:20.920 Test: blockdev write read invalid size ...passed 00:06:20.920 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:20.920 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:20.920 Test: blockdev write read max offset ...passed 00:06:20.920 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:20.920 Test: blockdev writev readv 8 blocks ...passed 00:06:20.920 Test: blockdev writev readv 30 x 1block ...passed 00:06:20.920 Test: blockdev writev readv block ...passed 00:06:20.920 Test: blockdev writev readv size > 128k ...passed 00:06:20.920 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:20.920 Test: blockdev comparev and writev ...[2024-12-16 19:08:05.141994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d4638000 len:0x1000 00:06:20.920 [2024-12-16 19:08:05.142165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:20.920 passed 00:06:20.920 Test: blockdev nvme passthru rw ...passed 00:06:20.920 Test: blockdev nvme passthru vendor specific ...[2024-12-16 19:08:05.143861] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:20.920 [2024-12-16 19:08:05.144288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:20.920 passed 00:06:20.920 Test: blockdev nvme admin passthru ...passed 00:06:20.920 Test: blockdev copy ...passed 00:06:20.920 Suite: bdevio tests on: Nvme1n1 00:06:20.920 Test: blockdev write read block ...passed 00:06:20.920 Test: blockdev write zeroes read block ...passed 00:06:20.920 Test: blockdev write zeroes read no split ...passed 00:06:20.920 Test: blockdev write zeroes read split ...passed 00:06:20.920 Test: blockdev write zeroes read split partial ...passed 00:06:20.920 Test: blockdev reset ...[2024-12-16 19:08:05.198315] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:06:20.920 [2024-12-16 19:08:05.200989] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:06:20.920 passed 00:06:20.920 Test: blockdev write read 8 blocks ...passed 00:06:20.920 Test: blockdev write read size > 128k ...passed 00:06:20.920 Test: blockdev write read invalid size ...passed 00:06:20.920 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:20.920 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:20.920 Test: blockdev write read max offset ...passed 00:06:20.920 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:20.920 Test: blockdev writev readv 8 blocks ...passed 00:06:20.920 Test: blockdev writev readv 30 x 1block ...passed 00:06:20.920 Test: blockdev writev readv block ...passed 00:06:20.920 Test: blockdev writev readv size > 128k ...passed 00:06:20.920 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:20.920 Test: blockdev comparev and writev ...[2024-12-16 19:08:05.208658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d4634000 len:0x1000 00:06:20.920 [2024-12-16 19:08:05.208705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:20.920 passed 00:06:20.920 Test: blockdev nvme passthru rw ...passed 00:06:20.920 Test: blockdev nvme passthru vendor specific ...passed 00:06:20.920 Test: blockdev nvme admin passthru ...[2024-12-16 19:08:05.209607] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:20.920 [2024-12-16 19:08:05.209637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:20.920 passed 00:06:20.920 Test: blockdev copy ...passed 00:06:20.920 Suite: bdevio tests on: Nvme0n1 00:06:20.920 Test: blockdev write read block ...passed 00:06:20.920 Test: blockdev write zeroes read block ...passed 00:06:20.920 Test: blockdev write zeroes read no split ...passed 00:06:20.920 Test: blockdev write zeroes read split ...passed 00:06:20.920 Test: blockdev write zeroes read split partial ...passed 00:06:20.920 Test: blockdev reset ...[2024-12-16 19:08:05.265700] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:06:20.920 [2024-12-16 19:08:05.268494] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 
00:06:20.920 passed 00:06:20.920 Test: blockdev write read 8 blocks ...passed 00:06:20.920 Test: blockdev write read size > 128k ...passed 00:06:20.920 Test: blockdev write read invalid size ...passed 00:06:20.920 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:20.920 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:20.920 Test: blockdev write read max offset ...passed 00:06:20.920 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:20.920 Test: blockdev writev readv 8 blocks ...passed 00:06:20.920 Test: blockdev writev readv 30 x 1block ...passed 00:06:21.180 Test: blockdev writev readv block ...passed 00:06:21.180 Test: blockdev writev readv size > 128k ...passed 00:06:21.180 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:21.180 Test: blockdev comparev and writev ...passed 00:06:21.180 Test: blockdev nvme passthru rw ...[2024-12-16 19:08:05.274840] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:06:21.180 separate metadata which is not supported yet. 00:06:21.180 passed 00:06:21.180 Test: blockdev nvme passthru vendor specific ...passed 00:06:21.180 Test: blockdev nvme admin passthru ...[2024-12-16 19:08:05.275482] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:06:21.180 [2024-12-16 19:08:05.275530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:06:21.180 passed 00:06:21.180 Test: blockdev copy ...passed 00:06:21.180 00:06:21.180 Run Summary: Type Total Ran Passed Failed Inactive 00:06:21.180 suites 6 6 n/a 0 0 00:06:21.180 tests 138 138 138 0 0 00:06:21.180 asserts 893 893 893 0 n/a 00:06:21.180 00:06:21.180 Elapsed time = 1.033 seconds 00:06:21.180 0 00:06:21.180 19:08:05 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 61739 00:06:21.180 19:08:05 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 61739 ']' 00:06:21.180 19:08:05 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 61739 00:06:21.180 19:08:05 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:06:21.180 19:08:05 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:21.180 19:08:05 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61739 00:06:21.180 19:08:05 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:21.180 19:08:05 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:21.180 19:08:05 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61739' 00:06:21.180 killing process with pid 61739 00:06:21.180 19:08:05 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 61739 00:06:21.180 19:08:05 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 61739 00:06:21.749 19:08:05 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:06:21.749 00:06:21.749 real 0m2.094s 00:06:21.749 user 0m5.338s 00:06:21.749 sys 0m0.276s 00:06:21.749 19:08:05 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:21.749 19:08:05 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:21.749 ************************************ 00:06:21.749 END 
TEST bdev_bounds 00:06:21.749 ************************************ 00:06:21.749 19:08:06 blockdev_nvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:21.749 19:08:06 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:06:21.749 19:08:06 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:21.749 19:08:06 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:21.749 ************************************ 00:06:21.749 START TEST bdev_nbd 00:06:21.749 ************************************ 00:06:21.749 19:08:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:21.749 19:08:06 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:06:21.749 19:08:06 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:06:21.749 19:08:06 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:21.749 19:08:06 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:21.749 19:08:06 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:21.749 19:08:06 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:06:21.749 19:08:06 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:06:21.749 19:08:06 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:06:21.749 19:08:06 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:06:21.749 19:08:06 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:06:21.749 19:08:06 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:06:21.749 19:08:06 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:21.749 19:08:06 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:06:21.749 19:08:06 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:21.749 19:08:06 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:06:21.749 19:08:06 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=61793 00:06:21.749 19:08:06 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:06:21.749 19:08:06 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 61793 /var/tmp/spdk-nbd.sock 00:06:21.749 19:08:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 61793 ']' 00:06:21.749 19:08:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:21.749 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
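Once the bdev_svc app is listening on its dedicated socket, bdev_nbd maps each bdev to a kernel /dev/nbdX node through the nbd_start_disk RPC, which is what the per-device traces below do for Nvme0n1 through Nvme3n1. Reduced to plain rpc.py calls (socket path and bdev name as in the log; when no device argument is given the target picks a free node and prints it):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    nbd_device=$($rpc nbd_start_disk Nvme0n1)        # e.g. /dev/nbd0
    $rpc nbd_get_disks | jq -r '.[] | .nbd_device'   # list active mappings
    $rpc nbd_stop_disk "$nbd_device"                 # tear the export down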
00:06:21.749 19:08:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:21.749 19:08:06 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:21.749 19:08:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:21.749 19:08:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:21.749 19:08:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:21.749 [2024-12-16 19:08:06.093104] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:06:21.749 [2024-12-16 19:08:06.093233] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:22.010 [2024-12-16 19:08:06.253523] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.010 [2024-12-16 19:08:06.355951] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.952 19:08:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:22.952 19:08:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:06:22.952 19:08:06 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:22.952 19:08:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:22.952 19:08:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:22.952 19:08:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:06:22.952 19:08:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:22.952 19:08:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:22.952 19:08:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:22.952 19:08:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:06:22.952 19:08:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:06:22.952 19:08:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:06:22.952 19:08:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:06:22.952 19:08:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:22.952 19:08:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:06:22.952 19:08:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:06:22.952 19:08:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:06:22.952 19:08:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:06:22.952 19:08:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:22.952 19:08:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # 
local i 00:06:22.952 19:08:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:22.952 19:08:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:22.953 19:08:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:22.953 19:08:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:22.953 19:08:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:22.953 19:08:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:22.953 19:08:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:22.953 1+0 records in 00:06:22.953 1+0 records out 00:06:22.953 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000311516 s, 13.1 MB/s 00:06:22.953 19:08:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:22.953 19:08:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:22.953 19:08:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:22.953 19:08:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:22.953 19:08:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:22.953 19:08:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:22.953 19:08:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:22.953 19:08:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:06:23.213 19:08:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:06:23.213 19:08:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:06:23.213 19:08:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:06:23.213 19:08:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:23.213 19:08:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:23.213 19:08:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:23.213 19:08:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:23.213 19:08:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:23.213 19:08:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:23.213 19:08:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:23.213 19:08:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:23.213 19:08:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:23.213 1+0 records in 00:06:23.213 1+0 records out 00:06:23.213 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000330155 s, 12.4 MB/s 00:06:23.213 19:08:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:23.213 19:08:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:23.213 19:08:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:23.213 19:08:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:23.213 19:08:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:23.213 19:08:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:23.214 19:08:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:23.214 19:08:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:06:23.475 19:08:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:06:23.475 19:08:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:06:23.475 19:08:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:06:23.475 19:08:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:06:23.475 19:08:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:23.475 19:08:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:23.475 19:08:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:23.475 19:08:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:06:23.475 19:08:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:23.475 19:08:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:23.475 19:08:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:23.475 19:08:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:23.475 1+0 records in 00:06:23.475 1+0 records out 00:06:23.475 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000368586 s, 11.1 MB/s 00:06:23.475 19:08:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:23.475 19:08:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:23.475 19:08:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:23.475 19:08:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:23.475 19:08:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:23.475 19:08:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:23.475 19:08:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:23.475 19:08:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:06:23.783 19:08:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:06:23.783 19:08:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:06:23.783 19:08:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:06:23.783 19:08:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:06:23.783 19:08:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:23.783 19:08:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:23.783 19:08:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:23.783 19:08:07 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:06:23.783 19:08:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:23.783 19:08:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:23.783 19:08:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:23.783 19:08:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:23.783 1+0 records in 00:06:23.783 1+0 records out 00:06:23.783 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000376093 s, 10.9 MB/s 00:06:23.783 19:08:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:23.783 19:08:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:23.783 19:08:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:23.783 19:08:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:23.783 19:08:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:23.783 19:08:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:23.783 19:08:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:23.783 19:08:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:06:23.783 19:08:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:06:23.783 19:08:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:06:23.783 19:08:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:06:23.783 19:08:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:06:23.783 19:08:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:23.783 19:08:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:23.783 19:08:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:23.783 19:08:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:06:23.783 19:08:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:23.783 19:08:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:23.783 19:08:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:23.783 19:08:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:23.783 1+0 records in 00:06:23.783 1+0 records out 00:06:23.783 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000318837 s, 12.8 MB/s 00:06:24.065 19:08:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:24.065 19:08:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:24.065 19:08:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:24.065 19:08:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:24.065 19:08:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:24.065 19:08:08 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:24.065 19:08:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:24.065 19:08:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:06:24.065 19:08:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:06:24.065 19:08:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:06:24.065 19:08:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:06:24.065 19:08:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:06:24.065 19:08:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:24.065 19:08:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:24.065 19:08:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:24.065 19:08:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:06:24.065 19:08:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:24.065 19:08:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:24.065 19:08:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:24.065 19:08:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:24.065 1+0 records in 00:06:24.065 1+0 records out 00:06:24.065 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00166263 s, 2.5 MB/s 00:06:24.065 19:08:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:24.065 19:08:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:24.065 19:08:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:24.065 19:08:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:24.065 19:08:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:24.065 19:08:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:24.065 19:08:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:24.065 19:08:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:24.332 19:08:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:06:24.332 { 00:06:24.332 "nbd_device": "/dev/nbd0", 00:06:24.332 "bdev_name": "Nvme0n1" 00:06:24.332 }, 00:06:24.332 { 00:06:24.332 "nbd_device": "/dev/nbd1", 00:06:24.332 "bdev_name": "Nvme1n1" 00:06:24.332 }, 00:06:24.332 { 00:06:24.332 "nbd_device": "/dev/nbd2", 00:06:24.332 "bdev_name": "Nvme2n1" 00:06:24.332 }, 00:06:24.332 { 00:06:24.332 "nbd_device": "/dev/nbd3", 00:06:24.332 "bdev_name": "Nvme2n2" 00:06:24.332 }, 00:06:24.332 { 00:06:24.332 "nbd_device": "/dev/nbd4", 00:06:24.332 "bdev_name": "Nvme2n3" 00:06:24.332 }, 00:06:24.332 { 00:06:24.332 "nbd_device": "/dev/nbd5", 00:06:24.332 "bdev_name": "Nvme3n1" 00:06:24.332 } 00:06:24.332 ]' 00:06:24.332 19:08:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:06:24.332 19:08:08 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@119 -- # echo '[ 00:06:24.332 { 00:06:24.332 "nbd_device": "/dev/nbd0", 00:06:24.332 "bdev_name": "Nvme0n1" 00:06:24.332 }, 00:06:24.332 { 00:06:24.332 "nbd_device": "/dev/nbd1", 00:06:24.332 "bdev_name": "Nvme1n1" 00:06:24.332 }, 00:06:24.332 { 00:06:24.332 "nbd_device": "/dev/nbd2", 00:06:24.332 "bdev_name": "Nvme2n1" 00:06:24.332 }, 00:06:24.332 { 00:06:24.332 "nbd_device": "/dev/nbd3", 00:06:24.332 "bdev_name": "Nvme2n2" 00:06:24.332 }, 00:06:24.332 { 00:06:24.332 "nbd_device": "/dev/nbd4", 00:06:24.332 "bdev_name": "Nvme2n3" 00:06:24.332 }, 00:06:24.332 { 00:06:24.332 "nbd_device": "/dev/nbd5", 00:06:24.332 "bdev_name": "Nvme3n1" 00:06:24.332 } 00:06:24.332 ]' 00:06:24.332 19:08:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:06:24.332 19:08:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:06:24.332 19:08:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:24.332 19:08:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:06:24.332 19:08:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:24.332 19:08:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:24.332 19:08:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:24.332 19:08:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:24.592 19:08:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:24.592 19:08:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:24.592 19:08:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:24.592 19:08:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:24.592 19:08:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:24.592 19:08:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:24.592 19:08:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:24.592 19:08:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:24.592 19:08:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:24.592 19:08:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:24.850 19:08:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:24.850 19:08:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:24.850 19:08:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:24.850 19:08:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:24.850 19:08:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:24.850 19:08:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:24.850 19:08:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:24.850 19:08:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:24.850 19:08:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:24.850 19:08:09 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:06:25.107 19:08:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:06:25.107 19:08:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:06:25.107 19:08:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:06:25.107 19:08:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:25.107 19:08:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:25.107 19:08:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:06:25.107 19:08:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:25.107 19:08:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:25.107 19:08:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:25.107 19:08:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:06:25.107 19:08:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:06:25.107 19:08:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:06:25.107 19:08:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:06:25.107 19:08:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:25.107 19:08:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:25.107 19:08:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:06:25.107 19:08:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:25.107 19:08:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:25.107 19:08:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:25.107 19:08:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:06:25.366 19:08:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:06:25.366 19:08:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:06:25.366 19:08:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:06:25.366 19:08:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:25.366 19:08:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:25.366 19:08:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:06:25.366 19:08:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:25.366 19:08:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:25.366 19:08:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:25.366 19:08:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:06:25.626 19:08:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:06:25.626 19:08:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:06:25.626 19:08:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:06:25.626 19:08:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:25.626 19:08:09 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:25.626 19:08:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:06:25.626 19:08:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:25.626 19:08:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:25.626 19:08:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:25.626 19:08:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:25.626 19:08:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:25.888 19:08:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:25.888 19:08:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:25.888 19:08:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:25.888 19:08:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:25.888 19:08:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:25.888 19:08:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:25.888 19:08:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:25.888 19:08:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:25.888 19:08:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:25.888 19:08:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:06:25.888 19:08:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:06:25.888 19:08:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:06:25.888 19:08:10 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:25.888 19:08:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:25.888 19:08:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:25.888 19:08:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:25.888 19:08:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:25.888 19:08:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:25.888 19:08:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:25.888 19:08:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:25.888 19:08:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:25.888 19:08:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:25.888 19:08:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:25.888 19:08:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:25.888 19:08:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:06:25.888 
19:08:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:25.888 19:08:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:25.888 19:08:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:06:26.149 /dev/nbd0 00:06:26.149 19:08:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:26.149 19:08:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:26.149 19:08:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:26.149 19:08:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:26.149 19:08:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:26.149 19:08:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:26.150 19:08:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:26.150 19:08:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:26.150 19:08:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:26.150 19:08:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:26.150 19:08:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:26.150 1+0 records in 00:06:26.150 1+0 records out 00:06:26.150 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000394605 s, 10.4 MB/s 00:06:26.150 19:08:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:26.150 19:08:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:26.150 19:08:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:26.150 19:08:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:26.150 19:08:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:26.150 19:08:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:26.150 19:08:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:26.150 19:08:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:06:26.408 /dev/nbd1 00:06:26.408 19:08:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:26.408 19:08:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:26.408 19:08:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:26.408 19:08:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:26.408 19:08:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:26.408 19:08:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:26.408 19:08:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:26.408 19:08:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:26.408 19:08:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:26.408 19:08:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 
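The helper being traced here, waitfornbd, is the heart of the attach check: it polls /proc/partitions until the kernel publishes the new node, then proves the device actually answers I/O by reading one 4 KiB block with O_DIRECT. A minimal bash sketch reconstructed from these trace lines (not the verbatim SPDK source; the scratch-file path and the sleep back-off are assumptions, since this run succeeds on the first pass):

waitfornbd() {
    local nbd_name=$1 i
    # Phase 1: wait for the device to show up in /proc/partitions.
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1    # assumed back-off; not visible in this trace
    done
    # Phase 2: one direct 4 KiB read; a non-empty scratch file means the
    # NBD device is really serving I/O, not just registered.
    for ((i = 1; i <= 20; i++)); do
        if dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct; then
            local size
            size=$(stat -c %s /tmp/nbdtest)
            rm -f /tmp/nbdtest
            [ "$size" != 0 ] && return 0
        fi
        sleep 0.1    # assumed
    done
    return 1
}

The remaining devices in the trace below go through exactly the same loop.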
00:06:26.408 19:08:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:26.408 1+0 records in 00:06:26.408 1+0 records out 00:06:26.408 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000595292 s, 6.9 MB/s 00:06:26.408 19:08:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:26.408 19:08:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:26.408 19:08:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:26.408 19:08:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:26.408 19:08:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:26.408 19:08:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:26.408 19:08:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:26.408 19:08:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:06:26.408 /dev/nbd10 00:06:26.408 19:08:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:06:26.666 19:08:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:06:26.666 19:08:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:06:26.666 19:08:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:26.666 19:08:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:26.666 19:08:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:26.666 19:08:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:06:26.666 19:08:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:26.666 19:08:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:26.666 19:08:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:26.666 19:08:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:26.666 1+0 records in 00:06:26.666 1+0 records out 00:06:26.667 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000435174 s, 9.4 MB/s 00:06:26.667 19:08:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:26.667 19:08:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:26.667 19:08:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:26.667 19:08:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:26.667 19:08:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:26.667 19:08:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:26.667 19:08:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:26.667 19:08:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:06:26.667 /dev/nbd11 00:06:26.667 19:08:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename 
/dev/nbd11 00:06:26.667 19:08:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:06:26.667 19:08:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:06:26.667 19:08:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:26.667 19:08:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:26.667 19:08:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:26.667 19:08:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:06:26.667 19:08:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:26.667 19:08:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:26.667 19:08:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:26.667 19:08:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:26.667 1+0 records in 00:06:26.667 1+0 records out 00:06:26.667 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000415658 s, 9.9 MB/s 00:06:26.667 19:08:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:26.667 19:08:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:26.667 19:08:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:26.667 19:08:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:26.667 19:08:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:26.667 19:08:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:26.667 19:08:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:26.667 19:08:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:06:26.928 /dev/nbd12 00:06:26.928 19:08:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:06:26.928 19:08:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:06:26.928 19:08:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:06:26.928 19:08:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:26.928 19:08:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:26.928 19:08:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:26.928 19:08:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:06:26.928 19:08:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:26.928 19:08:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:26.928 19:08:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:26.928 19:08:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:26.928 1+0 records in 00:06:26.928 1+0 records out 00:06:26.928 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000566937 s, 7.2 MB/s 00:06:26.928 19:08:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:26.928 19:08:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:26.928 19:08:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:26.928 19:08:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:26.928 19:08:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:26.928 19:08:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:26.928 19:08:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:26.928 19:08:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:06:27.189 /dev/nbd13 00:06:27.189 19:08:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:06:27.189 19:08:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:06:27.189 19:08:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:06:27.189 19:08:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:27.189 19:08:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:27.189 19:08:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:27.189 19:08:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:06:27.189 19:08:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:27.189 19:08:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:27.189 19:08:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:27.189 19:08:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:27.189 1+0 records in 00:06:27.189 1+0 records out 00:06:27.189 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000678123 s, 6.0 MB/s 00:06:27.189 19:08:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:27.189 19:08:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:27.189 19:08:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:27.189 19:08:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:27.189 19:08:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:27.189 19:08:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:27.189 19:08:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:27.189 19:08:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:27.189 19:08:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:27.189 19:08:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:27.448 19:08:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:27.448 { 00:06:27.448 "nbd_device": "/dev/nbd0", 00:06:27.448 "bdev_name": "Nvme0n1" 00:06:27.448 }, 00:06:27.448 { 00:06:27.448 "nbd_device": "/dev/nbd1", 00:06:27.448 "bdev_name": "Nvme1n1" 00:06:27.448 
}, 00:06:27.448 { 00:06:27.448 "nbd_device": "/dev/nbd10", 00:06:27.448 "bdev_name": "Nvme2n1" 00:06:27.448 }, 00:06:27.448 { 00:06:27.448 "nbd_device": "/dev/nbd11", 00:06:27.448 "bdev_name": "Nvme2n2" 00:06:27.448 }, 00:06:27.448 { 00:06:27.448 "nbd_device": "/dev/nbd12", 00:06:27.448 "bdev_name": "Nvme2n3" 00:06:27.448 }, 00:06:27.448 { 00:06:27.448 "nbd_device": "/dev/nbd13", 00:06:27.448 "bdev_name": "Nvme3n1" 00:06:27.448 } 00:06:27.448 ]' 00:06:27.448 19:08:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:27.448 19:08:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:27.448 { 00:06:27.448 "nbd_device": "/dev/nbd0", 00:06:27.448 "bdev_name": "Nvme0n1" 00:06:27.448 }, 00:06:27.448 { 00:06:27.448 "nbd_device": "/dev/nbd1", 00:06:27.448 "bdev_name": "Nvme1n1" 00:06:27.448 }, 00:06:27.448 { 00:06:27.448 "nbd_device": "/dev/nbd10", 00:06:27.448 "bdev_name": "Nvme2n1" 00:06:27.448 }, 00:06:27.448 { 00:06:27.448 "nbd_device": "/dev/nbd11", 00:06:27.448 "bdev_name": "Nvme2n2" 00:06:27.448 }, 00:06:27.448 { 00:06:27.448 "nbd_device": "/dev/nbd12", 00:06:27.448 "bdev_name": "Nvme2n3" 00:06:27.448 }, 00:06:27.448 { 00:06:27.448 "nbd_device": "/dev/nbd13", 00:06:27.448 "bdev_name": "Nvme3n1" 00:06:27.448 } 00:06:27.448 ]' 00:06:27.448 19:08:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:27.448 /dev/nbd1 00:06:27.448 /dev/nbd10 00:06:27.448 /dev/nbd11 00:06:27.448 /dev/nbd12 00:06:27.448 /dev/nbd13' 00:06:27.448 19:08:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:27.448 /dev/nbd1 00:06:27.448 /dev/nbd10 00:06:27.448 /dev/nbd11 00:06:27.448 /dev/nbd12 00:06:27.448 /dev/nbd13' 00:06:27.448 19:08:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:27.448 19:08:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:06:27.448 19:08:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:06:27.448 19:08:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:06:27.448 19:08:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:06:27.448 19:08:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:06:27.448 19:08:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:27.448 19:08:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:27.448 19:08:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:27.448 19:08:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:27.448 19:08:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:27.448 19:08:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:06:27.448 256+0 records in 00:06:27.448 256+0 records out 00:06:27.448 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00739809 s, 142 MB/s 00:06:27.448 19:08:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:27.448 19:08:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:27.448 256+0 records in 00:06:27.448 256+0 records out 
00:06:27.448 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0656071 s, 16.0 MB/s 00:06:27.448 19:08:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:27.448 19:08:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:27.707 256+0 records in 00:06:27.707 256+0 records out 00:06:27.707 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0682986 s, 15.4 MB/s 00:06:27.707 19:08:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:27.707 19:08:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:06:27.707 256+0 records in 00:06:27.707 256+0 records out 00:06:27.707 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0653211 s, 16.1 MB/s 00:06:27.707 19:08:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:27.707 19:08:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:06:27.707 256+0 records in 00:06:27.707 256+0 records out 00:06:27.707 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0652346 s, 16.1 MB/s 00:06:27.707 19:08:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:27.707 19:08:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:06:27.967 256+0 records in 00:06:27.967 256+0 records out 00:06:27.967 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0640793 s, 16.4 MB/s 00:06:27.967 19:08:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:27.967 19:08:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:06:27.967 256+0 records in 00:06:27.967 256+0 records out 00:06:27.967 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0687131 s, 15.3 MB/s 00:06:27.967 19:08:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:06:27.967 19:08:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:27.967 19:08:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:27.967 19:08:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:27.967 19:08:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:27.967 19:08:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:27.967 19:08:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:27.967 19:08:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:27.967 19:08:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:06:27.967 19:08:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:27.967 19:08:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:06:27.967 19:08:12 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:27.967 19:08:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:06:27.967 19:08:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:27.967 19:08:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:06:27.967 19:08:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:27.967 19:08:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:06:27.967 19:08:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:27.967 19:08:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:06:27.967 19:08:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:27.968 19:08:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:27.968 19:08:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:27.968 19:08:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:27.968 19:08:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:27.968 19:08:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:27.968 19:08:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:27.968 19:08:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:28.227 19:08:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:28.227 19:08:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:28.227 19:08:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:28.227 19:08:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:28.227 19:08:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:28.227 19:08:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:28.227 19:08:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:28.227 19:08:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:28.227 19:08:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:28.227 19:08:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:28.486 19:08:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:28.486 19:08:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:28.486 19:08:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:28.486 19:08:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:28.486 19:08:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:28.486 19:08:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 
/proc/partitions 00:06:28.486 19:08:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:28.486 19:08:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:28.486 19:08:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:28.486 19:08:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:06:28.486 19:08:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:06:28.486 19:08:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:06:28.486 19:08:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:06:28.486 19:08:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:28.486 19:08:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:28.486 19:08:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:06:28.486 19:08:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:28.486 19:08:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:28.486 19:08:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:28.486 19:08:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:06:28.744 19:08:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:06:28.744 19:08:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:06:28.744 19:08:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:06:28.744 19:08:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:28.744 19:08:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:28.744 19:08:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:06:28.744 19:08:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:28.744 19:08:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:28.744 19:08:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:28.744 19:08:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:06:29.002 19:08:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:06:29.002 19:08:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:06:29.002 19:08:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:06:29.002 19:08:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:29.002 19:08:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:29.002 19:08:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:06:29.002 19:08:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:29.002 19:08:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:29.002 19:08:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:29.002 19:08:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:06:29.261 19:08:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # 
basename /dev/nbd13 00:06:29.261 19:08:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:06:29.261 19:08:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:06:29.261 19:08:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:29.261 19:08:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:29.261 19:08:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:06:29.261 19:08:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:29.261 19:08:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:29.261 19:08:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:29.261 19:08:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:29.261 19:08:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:29.519 19:08:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:29.519 19:08:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:29.519 19:08:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:29.519 19:08:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:29.519 19:08:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:29.519 19:08:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:29.519 19:08:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:29.519 19:08:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:29.519 19:08:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:29.519 19:08:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:06:29.519 19:08:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:29.519 19:08:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:06:29.519 19:08:13 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:29.519 19:08:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:29.519 19:08:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:06:29.519 19:08:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:06:29.777 malloc_lvol_verify 00:06:29.777 19:08:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:06:29.777 b478175a-d4ef-47ba-90a1-6671332e1630 00:06:29.777 19:08:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:06:30.034 99258913-0ee6-4a77-9414-97405e883838 00:06:30.034 19:08:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:06:30.293 /dev/nbd0 00:06:30.293 19:08:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:06:30.293 19:08:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 
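With all six devices attached, the data pass traced above is plain dd and cmp: seed a scratch file with 1 MiB of random data, push it to every NBD node with O_DIRECT, then byte-compare each device against the file. A compact sketch of that nbd_dd_data_verify flow, reconstructed from the commands visible in the trace (the scratch path is shortened and error handling omitted):

nbd_dd_data_verify() {
    local nbd_list=($1)              # space-separated device nodes
    local operation=$2               # "write" or "verify"
    local tmp_file=/tmp/nbdrandtest  # assumed path; the trace uses the repo copy
    if [ "$operation" = write ]; then
        # 256 x 4 KiB of random data, written through to each device.
        dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
        for nbd in "${nbd_list[@]}"; do
            dd if="$tmp_file" of="$nbd" bs=4096 count=256 oflag=direct
        done
    elif [ "$operation" = verify ]; then
        # Byte-compare the first 1 MiB of each device against the file.
        for nbd in "${nbd_list[@]}"; do
            cmp -b -n 1M "$tmp_file" "$nbd"
        done
        rm "$tmp_file"
    fi
}

After the verify pass the harness stops each device and asserts that nbd_get_disks comes back empty, which is the jq / 'grep -c /dev/nbd' sequence ending in count=0 above; the lvol round-trip that starts here exercises the same attach machinery through a logical volume plus mkfs.ext4.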
00:06:30.293 19:08:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:06:30.293 19:08:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:06:30.293 19:08:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:06:30.293 mke2fs 1.47.0 (5-Feb-2023) 00:06:30.293 Discarding device blocks: 0/4096 done 00:06:30.293 Creating filesystem with 4096 1k blocks and 1024 inodes 00:06:30.293 00:06:30.293 Allocating group tables: 0/1 done 00:06:30.293 Writing inode tables: 0/1 done 00:06:30.293 Creating journal (1024 blocks): done 00:06:30.293 Writing superblocks and filesystem accounting information: 0/1 done 00:06:30.293 00:06:30.293 19:08:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:30.293 19:08:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:30.293 19:08:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:06:30.293 19:08:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:30.293 19:08:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:30.293 19:08:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:30.293 19:08:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:30.552 19:08:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:30.552 19:08:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:30.552 19:08:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:30.552 19:08:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:30.552 19:08:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:30.552 19:08:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:30.552 19:08:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:30.552 19:08:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:30.552 19:08:14 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 61793 00:06:30.552 19:08:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 61793 ']' 00:06:30.552 19:08:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 61793 00:06:30.552 19:08:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:06:30.552 19:08:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:30.552 19:08:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61793 00:06:30.552 19:08:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:30.552 19:08:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:30.552 killing process with pid 61793 00:06:30.552 19:08:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61793' 00:06:30.552 19:08:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 61793 00:06:30.552 19:08:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 61793 00:06:31.119 19:08:15 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:06:31.119 00:06:31.119 real 0m9.400s 00:06:31.119 user 0m13.604s 
00:06:31.119 sys 0m3.001s 00:06:31.119 19:08:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:31.119 ************************************ 00:06:31.119 END TEST bdev_nbd 00:06:31.119 ************************************ 00:06:31.119 19:08:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:31.119 19:08:15 blockdev_nvme -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:06:31.119 skipping fio tests on NVMe due to multi-ns failures. 00:06:31.119 19:08:15 blockdev_nvme -- bdev/blockdev.sh@801 -- # '[' nvme = nvme ']' 00:06:31.119 19:08:15 blockdev_nvme -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:06:31.119 19:08:15 blockdev_nvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:31.119 19:08:15 blockdev_nvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:31.119 19:08:15 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:06:31.119 19:08:15 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:31.119 19:08:15 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:31.119 ************************************ 00:06:31.119 START TEST bdev_verify 00:06:31.119 ************************************ 00:06:31.119 19:08:15 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:31.377 [2024-12-16 19:08:15.533521] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:06:31.377 [2024-12-16 19:08:15.533642] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62161 ] 00:06:31.377 [2024-12-16 19:08:15.693606] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:31.662 [2024-12-16 19:08:15.807710] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:31.662 [2024-12-16 19:08:15.807837] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.236 Running I/O for 5 seconds... 
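Everything after this point is driven by SPDK's bdevperf example binary rather than the NBD layer. The invocation below is copied from the run_test line above (repo path shortened), with the common flags glossed; -C is carried over from the trace as-is:

#   -q 128     queue depth per job
#   -o 4096    I/O size in bytes (4 KiB)
#   -w verify  write the bdevs, then read back and compare
#   -t 5       run for 5 seconds
#   -m 0x3     core mask for cores 0 and 1 (the two reactors logged just above)
./build/examples/bdevperf --json test/bdev/bdev.json \
    -q 128 -o 4096 -w verify -t 5 -C -m 0x3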
00:06:34.545 26688.00 IOPS, 104.25 MiB/s [2024-12-16T19:08:19.836Z] 26944.00 IOPS, 105.25 MiB/s [2024-12-16T19:08:20.774Z] 26282.67 IOPS, 102.67 MiB/s [2024-12-16T19:08:21.715Z] 25184.00 IOPS, 98.38 MiB/s [2024-12-16T19:08:21.715Z] 24409.60 IOPS, 95.35 MiB/s 00:06:37.361 Latency(us) 00:06:37.361 [2024-12-16T19:08:21.715Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:37.361 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:37.361 Verification LBA range: start 0x0 length 0xbd0bd 00:06:37.361 Nvme0n1 : 5.07 2069.62 8.08 0.00 0.00 61711.13 11746.07 60494.77 00:06:37.361 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:37.361 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:06:37.361 Nvme0n1 : 5.06 1972.40 7.70 0.00 0.00 64686.34 13712.15 68964.04 00:06:37.361 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:37.361 Verification LBA range: start 0x0 length 0xa0000 00:06:37.361 Nvme1n1 : 5.07 2068.99 8.08 0.00 0.00 61629.93 13712.15 56461.78 00:06:37.361 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:37.361 Verification LBA range: start 0xa0000 length 0xa0000 00:06:37.361 Nvme1n1 : 5.06 1971.79 7.70 0.00 0.00 64562.22 15224.52 60091.47 00:06:37.361 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:37.361 Verification LBA range: start 0x0 length 0x80000 00:06:37.361 Nvme2n1 : 5.07 2068.35 8.08 0.00 0.00 61538.11 13712.15 54848.59 00:06:37.361 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:37.361 Verification LBA range: start 0x80000 length 0x80000 00:06:37.361 Nvme2n1 : 5.07 1970.51 7.70 0.00 0.00 64440.49 16636.06 60494.77 00:06:37.361 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:37.361 Verification LBA range: start 0x0 length 0x80000 00:06:37.361 Nvme2n2 : 5.08 2067.74 8.08 0.00 0.00 61452.14 14014.62 57671.68 00:06:37.361 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:37.361 Verification LBA range: start 0x80000 length 0x80000 00:06:37.361 Nvme2n2 : 5.07 1969.98 7.70 0.00 0.00 64297.28 15829.46 61704.66 00:06:37.361 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:37.361 Verification LBA range: start 0x0 length 0x80000 00:06:37.361 Nvme2n3 : 5.08 2066.51 8.07 0.00 0.00 61389.01 14417.92 60898.07 00:06:37.361 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:37.361 Verification LBA range: start 0x80000 length 0x80000 00:06:37.361 Nvme2n3 : 5.08 1979.01 7.73 0.00 0.00 63930.78 3327.21 64124.46 00:06:37.361 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:37.361 Verification LBA range: start 0x0 length 0x20000 00:06:37.361 Nvme3n1 : 5.08 2065.03 8.07 0.00 0.00 61314.95 7410.61 63721.16 00:06:37.361 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:37.361 Verification LBA range: start 0x20000 length 0x20000 00:06:37.361 Nvme3n1 : 5.08 1978.48 7.73 0.00 0.00 63890.83 3327.21 63317.86 00:06:37.361 [2024-12-16T19:08:21.715Z] =================================================================================================================== 00:06:37.361 [2024-12-16T19:08:21.715Z] Total : 24248.42 94.72 0.00 0.00 62869.74 3327.21 68964.04 00:06:38.834 00:06:38.834 real 0m7.563s 00:06:38.834 user 0m14.094s 00:06:38.834 sys 0m0.260s 00:06:38.834 19:08:23 blockdev_nvme.bdev_verify -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:06:38.834 ************************************ 00:06:38.834 END TEST bdev_verify 00:06:38.834 ************************************ 00:06:38.834 19:08:23 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:06:38.834 19:08:23 blockdev_nvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:06:38.835 19:08:23 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:06:38.835 19:08:23 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:38.835 19:08:23 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:38.835 ************************************ 00:06:38.835 START TEST bdev_verify_big_io 00:06:38.835 ************************************ 00:06:38.835 19:08:23 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:06:39.094 [2024-12-16 19:08:23.194044] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:06:39.094 [2024-12-16 19:08:23.194239] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62259 ] 00:06:39.094 [2024-12-16 19:08:23.362674] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:39.353 [2024-12-16 19:08:23.530874] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:39.353 [2024-12-16 19:08:23.530954] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.296 Running I/O for 5 seconds... 
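The big-I/O variant starting here repeats the same bdevperf harness with only the transfer size changed, 64 KiB instead of 4 KiB:

# Identical flags except -o (command copied from the run_test line above):
./build/examples/bdevperf --json test/bdev/bdev.json \
    -q 128 -o 65536 -w verify -t 5 -C -m 0x3

In the result table that follows, Latency(us) marks the Average/min/max columns as microseconds, and Fail/s and TO/s are presumably failed and timed-out I/Os per second; markedly lower IOPS than the 4 KiB run is expected at sixteen times the I/O size.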
00:06:45.496 587.00 IOPS, 36.69 MiB/s [2024-12-16T19:08:30.414Z] 1652.00 IOPS, 103.25 MiB/s [2024-12-16T19:08:30.982Z] 2393.33 IOPS, 149.58 MiB/s 00:06:46.628 Latency(us) 00:06:46.628 [2024-12-16T19:08:30.982Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:46.628 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:46.628 Verification LBA range: start 0x0 length 0xbd0b 00:06:46.628 Nvme0n1 : 5.77 132.16 8.26 0.00 0.00 941043.24 19963.27 1051802.39 00:06:46.628 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:46.628 Verification LBA range: start 0xbd0b length 0xbd0b 00:06:46.628 Nvme0n1 : 5.92 79.22 4.95 0.00 0.00 1552069.41 23592.96 1664816.05 00:06:46.628 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:46.628 Verification LBA range: start 0x0 length 0xa000 00:06:46.628 Nvme1n1 : 5.78 129.68 8.11 0.00 0.00 920174.44 83482.78 884030.23 00:06:46.628 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:46.628 Verification LBA range: start 0xa000 length 0xa000 00:06:46.628 Nvme1n1 : 5.96 82.57 5.16 0.00 0.00 1407280.88 64527.75 1516402.22 00:06:46.628 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:46.628 Verification LBA range: start 0x0 length 0x8000 00:06:46.628 Nvme2n1 : 5.78 132.92 8.31 0.00 0.00 878318.15 99614.72 825955.25 00:06:46.628 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:46.628 Verification LBA range: start 0x8000 length 0x8000 00:06:46.628 Nvme2n1 : 5.96 85.90 5.37 0.00 0.00 1284363.03 35288.62 1451874.46 00:06:46.629 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:46.629 Verification LBA range: start 0x0 length 0x8000 00:06:46.629 Nvme2n2 : 5.86 134.75 8.42 0.00 0.00 835498.11 82676.18 851766.35 00:06:46.629 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:46.629 Verification LBA range: start 0x8000 length 0x8000 00:06:46.629 Nvme2n2 : 6.05 101.67 6.35 0.00 0.00 1035008.33 12351.02 1490591.11 00:06:46.629 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:46.629 Verification LBA range: start 0x0 length 0x8000 00:06:46.629 Nvme2n3 : 5.93 147.38 9.21 0.00 0.00 749264.86 30852.33 903388.55 00:06:46.629 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:46.629 Verification LBA range: start 0x8000 length 0x8000 00:06:46.629 Nvme2n3 : 6.17 142.54 8.91 0.00 0.00 714891.00 6150.30 1522854.99 00:06:46.629 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:46.629 Verification LBA range: start 0x0 length 0x2000 00:06:46.629 Nvme3n1 : 5.93 154.72 9.67 0.00 0.00 692014.39 831.80 1000180.18 00:06:46.629 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:46.629 Verification LBA range: start 0x2000 length 0x2000 00:06:46.629 Nvme3n1 : 6.39 280.54 17.53 0.00 0.00 347112.90 406.45 1548666.09 00:06:46.629 [2024-12-16T19:08:30.983Z] =================================================================================================================== 00:06:46.629 [2024-12-16T19:08:30.983Z] Total : 1604.07 100.25 0.00 0.00 831180.48 406.45 1664816.05 00:06:48.008 00:06:48.008 real 0m9.033s 00:06:48.008 user 0m16.865s 00:06:48.008 sys 0m0.403s 00:06:48.008 19:08:32 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:48.008 ************************************ 
00:06:48.008 END TEST bdev_verify_big_io 00:06:48.008 ************************************ 00:06:48.008 19:08:32 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:06:48.008 19:08:32 blockdev_nvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:48.008 19:08:32 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:06:48.008 19:08:32 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:48.008 19:08:32 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:48.008 ************************************ 00:06:48.008 START TEST bdev_write_zeroes 00:06:48.008 ************************************ 00:06:48.008 19:08:32 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:48.008 [2024-12-16 19:08:32.252126] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:06:48.008 [2024-12-16 19:08:32.252268] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62381 ] 00:06:48.267 [2024-12-16 19:08:32.411987] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.267 [2024-12-16 19:08:32.511632] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.837 Running I/O for 1 seconds... 00:06:49.781 67584.00 IOPS, 264.00 MiB/s 00:06:49.781 Latency(us) 00:06:49.781 [2024-12-16T19:08:34.135Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:49.781 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:49.781 Nvme0n1 : 1.02 11240.13 43.91 0.00 0.00 11365.24 7561.85 21374.82 00:06:49.781 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:49.781 Nvme1n1 : 1.02 11227.35 43.86 0.00 0.00 11365.48 7713.08 21173.17 00:06:49.781 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:49.781 Nvme2n1 : 1.02 11214.62 43.81 0.00 0.00 11358.35 7612.26 19963.27 00:06:49.781 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:49.781 Nvme2n2 : 1.02 11201.64 43.76 0.00 0.00 11340.83 7561.85 18955.03 00:06:49.781 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:49.781 Nvme2n3 : 1.02 11189.01 43.71 0.00 0.00 11328.45 7511.43 20064.10 00:06:49.781 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:49.781 Nvme3n1 : 1.03 11176.43 43.66 0.00 0.00 11306.89 6175.51 21576.47 00:06:49.781 [2024-12-16T19:08:34.135Z] =================================================================================================================== 00:06:49.781 [2024-12-16T19:08:34.135Z] Total : 67249.19 262.69 0.00 0.00 11344.20 6175.51 21576.47 00:06:50.724 00:06:50.724 real 0m2.684s 00:06:50.724 user 0m2.364s 00:06:50.724 sys 0m0.206s 00:06:50.724 19:08:34 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:50.724 19:08:34 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:06:50.724 
************************************ 00:06:50.724 END TEST bdev_write_zeroes 00:06:50.724 ************************************ 00:06:50.724 19:08:34 blockdev_nvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:50.724 19:08:34 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:06:50.724 19:08:34 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:50.724 19:08:34 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:50.724 ************************************ 00:06:50.724 START TEST bdev_json_nonenclosed 00:06:50.724 ************************************ 00:06:50.724 19:08:34 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:50.724 [2024-12-16 19:08:34.979983] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:06:50.724 [2024-12-16 19:08:34.980109] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62435 ] 00:06:50.987 [2024-12-16 19:08:35.133169] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.987 [2024-12-16 19:08:35.243338] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.987 [2024-12-16 19:08:35.243429] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:06:50.987 [2024-12-16 19:08:35.243448] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:06:50.987 [2024-12-16 19:08:35.243457] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:51.250 00:06:51.250 real 0m0.510s 00:06:51.250 user 0m0.308s 00:06:51.250 sys 0m0.099s 00:06:51.250 19:08:35 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:51.250 ************************************ 00:06:51.250 END TEST bdev_json_nonenclosed 00:06:51.250 ************************************ 00:06:51.250 19:08:35 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:06:51.250 19:08:35 blockdev_nvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:51.250 19:08:35 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:06:51.250 19:08:35 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:51.250 19:08:35 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:51.250 ************************************ 00:06:51.250 START TEST bdev_json_nonarray 00:06:51.250 ************************************ 00:06:51.250 19:08:35 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:51.250 [2024-12-16 19:08:35.529532] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:06:51.250 [2024-12-16 19:08:35.529654] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62455 ] 00:06:51.511 [2024-12-16 19:08:35.682970] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.511 [2024-12-16 19:08:35.791113] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.511 [2024-12-16 19:08:35.791226] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:06:51.511 [2024-12-16 19:08:35.791245] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:06:51.511 [2024-12-16 19:08:35.791255] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:51.772 00:06:51.772 real 0m0.513s 00:06:51.772 user 0m0.316s 00:06:51.772 sys 0m0.091s 00:06:51.772 19:08:35 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:51.772 ************************************ 00:06:51.772 END TEST bdev_json_nonarray 00:06:51.772 ************************************ 00:06:51.772 19:08:35 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:06:51.772 19:08:36 blockdev_nvme -- bdev/blockdev.sh@824 -- # [[ nvme == bdev ]] 00:06:51.772 19:08:36 blockdev_nvme -- bdev/blockdev.sh@832 -- # [[ nvme == gpt ]] 00:06:51.772 19:08:36 blockdev_nvme -- bdev/blockdev.sh@836 -- # [[ nvme == crypto_sw ]] 00:06:51.772 19:08:36 blockdev_nvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:06:51.772 19:08:36 blockdev_nvme -- bdev/blockdev.sh@849 -- # cleanup 00:06:51.772 19:08:36 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:06:51.772 19:08:36 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:51.772 19:08:36 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:06:51.772 19:08:36 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:06:51.772 19:08:36 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:06:51.772 19:08:36 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:06:51.772 00:06:51.772 real 0m37.061s 00:06:51.772 user 0m57.565s 00:06:51.772 sys 0m5.337s 00:06:51.772 19:08:36 blockdev_nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:51.772 19:08:36 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:51.772 ************************************ 00:06:51.772 END TEST blockdev_nvme 00:06:51.772 ************************************ 00:06:51.772 19:08:36 -- spdk/autotest.sh@209 -- # uname -s 00:06:51.773 19:08:36 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:06:51.773 19:08:36 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:06:51.773 19:08:36 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:51.773 19:08:36 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:51.773 19:08:36 -- common/autotest_common.sh@10 -- # set +x 00:06:51.773 ************************************ 00:06:51.773 START TEST blockdev_nvme_gpt 00:06:51.773 ************************************ 00:06:51.773 19:08:36 blockdev_nvme_gpt -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:06:51.773 * Looking for test storage... 
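Both rejections above are the point of these negative tests: json_config_prepare_ctx refuses nonenclosed.json because the top level is not a single {} object, and nonarray.json because its "subsystems" value is not an array. For contrast, a minimal config that satisfies both checks (a sketch; the real bdev.json in this run additionally carries the four NVMe attach entries dumped later in the log):

# Minimal well-formed SPDK JSON config: one top-level object whose
# "subsystems" key is an array of subsystem objects (sketch only).
cat > /tmp/minimal.json <<'EOF'
{
  "subsystems": [
    { "subsystem": "bdev", "config": [] }
  ]
}
EOF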
00:06:52.034 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:52.034 19:08:36 blockdev_nvme_gpt -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:52.034 19:08:36 blockdev_nvme_gpt -- common/autotest_common.sh@1711 -- # lcov --version 00:06:52.034 19:08:36 blockdev_nvme_gpt -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:52.034 19:08:36 blockdev_nvme_gpt -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:52.034 19:08:36 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:52.034 19:08:36 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:52.034 19:08:36 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:52.034 19:08:36 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:06:52.034 19:08:36 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:06:52.034 19:08:36 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:06:52.034 19:08:36 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:06:52.034 19:08:36 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:06:52.034 19:08:36 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:06:52.034 19:08:36 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:06:52.034 19:08:36 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:52.034 19:08:36 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:06:52.034 19:08:36 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:06:52.034 19:08:36 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:52.034 19:08:36 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:52.034 19:08:36 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:06:52.034 19:08:36 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:06:52.034 19:08:36 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:52.034 19:08:36 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:06:52.034 19:08:36 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:06:52.034 19:08:36 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:06:52.034 19:08:36 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:06:52.034 19:08:36 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:52.034 19:08:36 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:06:52.034 19:08:36 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:06:52.034 19:08:36 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:52.034 19:08:36 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:52.034 19:08:36 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:06:52.034 19:08:36 blockdev_nvme_gpt -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:52.034 19:08:36 blockdev_nvme_gpt -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:52.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:52.034 --rc genhtml_branch_coverage=1 00:06:52.034 --rc genhtml_function_coverage=1 00:06:52.034 --rc genhtml_legend=1 00:06:52.034 --rc geninfo_all_blocks=1 00:06:52.034 --rc geninfo_unexecuted_blocks=1 00:06:52.034 00:06:52.034 ' 00:06:52.034 19:08:36 blockdev_nvme_gpt -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:52.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:52.034 --rc 
genhtml_branch_coverage=1 00:06:52.034 --rc genhtml_function_coverage=1 00:06:52.034 --rc genhtml_legend=1 00:06:52.034 --rc geninfo_all_blocks=1 00:06:52.034 --rc geninfo_unexecuted_blocks=1 00:06:52.034 00:06:52.034 ' 00:06:52.034 19:08:36 blockdev_nvme_gpt -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:52.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:52.034 --rc genhtml_branch_coverage=1 00:06:52.034 --rc genhtml_function_coverage=1 00:06:52.034 --rc genhtml_legend=1 00:06:52.034 --rc geninfo_all_blocks=1 00:06:52.034 --rc geninfo_unexecuted_blocks=1 00:06:52.034 00:06:52.034 ' 00:06:52.034 19:08:36 blockdev_nvme_gpt -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:52.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:52.034 --rc genhtml_branch_coverage=1 00:06:52.034 --rc genhtml_function_coverage=1 00:06:52.034 --rc genhtml_legend=1 00:06:52.034 --rc geninfo_all_blocks=1 00:06:52.034 --rc geninfo_unexecuted_blocks=1 00:06:52.034 00:06:52.034 ' 00:06:52.034 19:08:36 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:52.034 19:08:36 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:06:52.034 19:08:36 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:06:52.034 19:08:36 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:52.034 19:08:36 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:06:52.034 19:08:36 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:06:52.034 19:08:36 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:06:52.034 19:08:36 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:06:52.034 19:08:36 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:06:52.034 19:08:36 blockdev_nvme_gpt -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:06:52.034 19:08:36 blockdev_nvme_gpt -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:06:52.034 19:08:36 blockdev_nvme_gpt -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:06:52.034 19:08:36 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # uname -s 00:06:52.034 19:08:36 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:06:52.034 19:08:36 blockdev_nvme_gpt -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:06:52.034 19:08:36 blockdev_nvme_gpt -- bdev/blockdev.sh@719 -- # test_type=gpt 00:06:52.034 19:08:36 blockdev_nvme_gpt -- bdev/blockdev.sh@720 -- # crypto_device= 00:06:52.034 19:08:36 blockdev_nvme_gpt -- bdev/blockdev.sh@721 -- # dek= 00:06:52.034 19:08:36 blockdev_nvme_gpt -- bdev/blockdev.sh@722 -- # env_ctx= 00:06:52.034 19:08:36 blockdev_nvme_gpt -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:06:52.034 19:08:36 blockdev_nvme_gpt -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:06:52.034 19:08:36 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == bdev ]] 00:06:52.034 19:08:36 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == crypto_* ]] 00:06:52.034 19:08:36 blockdev_nvme_gpt -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:06:52.034 19:08:36 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=62539 00:06:52.034 19:08:36 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:06:52.034 19:08:36 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 62539 
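The cmp_versions walk above is how the harness picks its lcov flags: both version strings are split on '.', the fields are compared numerically left to right, and lt 1.15 2 succeeds because 1 < 2, which selects the lcov 1.x option names (lcov_branch_coverage, lcov_function_coverage) seen in the exports. The same idea as a stand-alone sketch (version_lt is a hypothetical name; the harness's own helpers are lt/cmp_versions in scripts/common.sh):

# Dotted-version "less than", field by field; succeeds when $1 < $2.
version_lt() {
    local IFS=. i
    local -a a b
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        (( 10#${a[i]:-0} < 10#${b[i]:-0} )) && return 0
        (( 10#${a[i]:-0} > 10#${b[i]:-0} )) && return 1
    done
    return 1  # equal versions are not less-than
}
version_lt 1.15 2 && echo "1.15 < 2"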
00:06:52.034 19:08:36 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # '[' -z 62539 ']' 00:06:52.034 19:08:36 blockdev_nvme_gpt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:52.034 19:08:36 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:52.034 19:08:36 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:06:52.035 19:08:36 blockdev_nvme_gpt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:52.035 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:52.035 19:08:36 blockdev_nvme_gpt -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:52.035 19:08:36 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:52.035 [2024-12-16 19:08:36.283683] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:06:52.035 [2024-12-16 19:08:36.283797] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62539 ] 00:06:52.296 [2024-12-16 19:08:36.442573] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.296 [2024-12-16 19:08:36.551258] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.869 19:08:37 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:52.869 19:08:37 blockdev_nvme_gpt -- common/autotest_common.sh@868 -- # return 0 00:06:52.869 19:08:37 blockdev_nvme_gpt -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:06:52.869 19:08:37 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # setup_gpt_conf 00:06:52.869 19:08:37 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:53.128 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:53.387 Waiting for block devices as requested 00:06:53.387 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:06:53.387 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:06:53.646 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:06:53.646 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:06:58.914 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:06:58.914 19:08:42 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:06:58.914 19:08:42 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:06:58.914 19:08:42 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:06:58.914 19:08:42 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:06:58.914 19:08:42 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:06:58.914 19:08:42 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:06:58.914 19:08:42 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:06:58.914 19:08:42 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:06:58.914 19:08:42 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:06:58.914 19:08:42 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:06:58.914 19:08:42 blockdev_nvme_gpt -- 
common/autotest_common.sh@1650 -- # local device=nvme0n1 00:06:58.914 19:08:42 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:06:58.914 19:08:42 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:58.914 19:08:42 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:06:58.914 19:08:42 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:06:58.914 19:08:42 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:06:58.914 19:08:42 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:06:58.914 19:08:42 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:06:58.914 19:08:42 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:06:58.914 19:08:42 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:58.914 19:08:42 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:06:58.914 19:08:42 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:12.0 00:06:58.914 19:08:42 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:06:58.914 19:08:42 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n1 00:06:58.914 19:08:42 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:06:58.914 19:08:42 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:06:58.914 19:08:42 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:58.914 19:08:42 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:06:58.914 19:08:42 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n2 00:06:58.914 19:08:42 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:06:58.914 19:08:42 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:06:58.914 19:08:42 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:58.914 19:08:42 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:06:58.914 19:08:42 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n3 00:06:58.914 19:08:42 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:06:58.914 19:08:42 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:06:58.914 19:08:42 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:58.914 19:08:42 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:06:58.914 19:08:42 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:13.0 00:06:58.914 19:08:42 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:06:58.914 19:08:42 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3c3n1 00:06:58.914 19:08:42 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:06:58.914 19:08:42 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:06:58.914 19:08:42 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:58.914 19:08:42 
blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:06:58.914 19:08:42 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:06:58.914 19:08:42 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:06:58.914 19:08:42 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:06:58.914 19:08:42 blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:06:58.914 19:08:42 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:06:58.914 19:08:42 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:06:58.914 19:08:42 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:06:58.914 BYT; 00:06:58.914 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:06:58.914 19:08:42 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:06:58.914 BYT; 00:06:58.914 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:06:58.914 19:08:42 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:06:58.914 19:08:42 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:06:58.914 19:08:42 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:06:58.914 19:08:42 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:06:58.914 19:08:42 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:06:58.914 19:08:42 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:06:58.914 19:08:42 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:06:58.914 19:08:42 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:06:58.914 19:08:42 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:06:58.914 19:08:42 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:58.914 19:08:42 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:06:58.914 19:08:42 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:06:58.914 19:08:42 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:58.914 19:08:42 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:06:58.914 19:08:42 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:06:58.914 19:08:42 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:06:58.914 19:08:42 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:06:58.914 19:08:42 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:06:58.914 19:08:42 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:06:58.914 19:08:42 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:06:58.914 19:08:42 blockdev_nvme_gpt -- scripts/common.sh@427 -- # 
GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:58.914 19:08:42 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:06:58.914 19:08:42 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:06:58.914 19:08:42 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:58.914 19:08:42 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:06:58.914 19:08:42 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:06:58.914 19:08:42 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:06:58.914 19:08:42 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:06:58.914 19:08:42 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:06:59.849 The operation has completed successfully. 00:06:59.849 19:08:43 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:07:00.783 The operation has completed successfully. 00:07:00.783 19:08:45 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:01.041 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:01.608 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:01.608 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:01.608 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:07:01.608 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:07:01.866 19:08:45 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:07:01.866 19:08:45 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:01.866 19:08:45 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:01.866 [] 00:07:01.866 19:08:45 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:01.866 19:08:45 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:07:01.866 19:08:45 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:07:01.866 19:08:45 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:07:01.866 19:08:45 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:01.866 19:08:46 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:07:01.866 19:08:46 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:01.866 19:08:46 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:02.126 19:08:46 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:02.126 19:08:46 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:07:02.126 19:08:46 
blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:02.126 19:08:46 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:02.126 19:08:46 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:02.126 19:08:46 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # cat 00:07:02.126 19:08:46 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:07:02.126 19:08:46 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:02.126 19:08:46 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:02.126 19:08:46 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:02.126 19:08:46 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:07:02.126 19:08:46 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:02.126 19:08:46 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:02.126 19:08:46 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:02.126 19:08:46 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:07:02.126 19:08:46 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:02.126 19:08:46 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:02.126 19:08:46 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:02.126 19:08:46 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:07:02.126 19:08:46 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:07:02.126 19:08:46 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:07:02.126 19:08:46 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:02.126 19:08:46 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:02.126 19:08:46 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:02.126 19:08:46 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:07:02.126 19:08:46 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # jq -r .name 00:07:02.127 19:08:46 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "2db80eb1-e078-425f-beec-533008ce454e"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "2db80eb1-e078-425f-beec-533008ce454e",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' 
"oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "05a5b329-68e3-4415-b115-edf747dd8c63"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "05a5b329-68e3-4415-b115-edf747dd8c63",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' 
"trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "0f3e923f-5f66-44cc-9343-55b0debe2a07"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "0f3e923f-5f66-44cc-9343-55b0debe2a07",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "6faa5eb2-8476-4f13-8e31-4323296abf66"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "6faa5eb2-8476-4f13-8e31-4323296abf66",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' 
"can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "efaa4235-41ae-4f40-8235-bd55a6818b93"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "efaa4235-41ae-4f40-8235-bd55a6818b93",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:07:02.127 19:08:46 blockdev_nvme_gpt -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:07:02.127 19:08:46 blockdev_nvme_gpt -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1 00:07:02.127 19:08:46 blockdev_nvme_gpt -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:07:02.127 19:08:46 blockdev_nvme_gpt -- bdev/blockdev.sh@791 -- # killprocess 62539 00:07:02.127 19:08:46 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # '[' -z 62539 ']' 00:07:02.127 19:08:46 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # kill -0 62539 00:07:02.127 19:08:46 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # uname 00:07:02.127 19:08:46 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:02.127 19:08:46 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62539 00:07:02.127 killing process with pid 62539 00:07:02.127 19:08:46 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:02.127 19:08:46 blockdev_nvme_gpt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:02.127 19:08:46 blockdev_nvme_gpt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62539' 00:07:02.127 19:08:46 blockdev_nvme_gpt -- common/autotest_common.sh@973 -- # kill 62539 00:07:02.127 19:08:46 blockdev_nvme_gpt -- common/autotest_common.sh@978 -- # wait 62539 00:07:03.503 19:08:47 blockdev_nvme_gpt -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:03.503 19:08:47 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:03.503 19:08:47 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:07:03.503 19:08:47 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:03.503 19:08:47 
blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:03.503 ************************************ 00:07:03.503 START TEST bdev_hello_world 00:07:03.503 ************************************ 00:07:03.503 19:08:47 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:03.503 [2024-12-16 19:08:47.819709] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:07:03.503 [2024-12-16 19:08:47.819824] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63156 ] 00:07:03.762 [2024-12-16 19:08:47.978060] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.762 [2024-12-16 19:08:48.086014] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.329 [2024-12-16 19:08:48.650833] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:07:04.329 [2024-12-16 19:08:48.651058] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:07:04.329 [2024-12-16 19:08:48.651088] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:07:04.329 [2024-12-16 19:08:48.653715] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:07:04.329 [2024-12-16 19:08:48.654160] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:07:04.329 [2024-12-16 19:08:48.654200] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:07:04.329 [2024-12-16 19:08:48.654354] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:07:04.329 00:07:04.329 [2024-12-16 19:08:48.654373] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:07:05.266 00:07:05.266 real 0m1.661s 00:07:05.266 user 0m1.351s 00:07:05.266 sys 0m0.203s 00:07:05.266 19:08:49 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:05.266 19:08:49 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:07:05.266 ************************************ 00:07:05.266 END TEST bdev_hello_world 00:07:05.266 ************************************ 00:07:05.266 19:08:49 blockdev_nvme_gpt -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:07:05.266 19:08:49 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:05.266 19:08:49 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:05.266 19:08:49 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:05.266 ************************************ 00:07:05.266 START TEST bdev_bounds 00:07:05.266 ************************************ 00:07:05.266 19:08:49 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:07:05.266 19:08:49 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=63198 00:07:05.266 19:08:49 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:07:05.266 19:08:49 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 63198' 00:07:05.266 Process bdevio pid: 63198 00:07:05.266 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
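bdev_bounds wraps exactly the two commands visible here: it launches the bdevio app idle against the generated bdev config, then drives every suite below through its RPC test script. Done by hand, the sequence is roughly (a sketch; the harness additionally waits on the RPC socket via waitforlisten before firing perform_tests):

SPDK=/home/vagrant/spdk_repo/spdk
# -w: start up and wait for an RPC trigger instead of running immediately;
# -s 0: no reserved hugepage memory for this app.
"$SPDK/test/bdev/bdevio/bdevio" -w -s 0 --json "$SPDK/test/bdev/bdev.json" &
# Once the app is listening, run every registered bdevio suite:
"$SPDK/test/bdev/bdevio/tests.py" perform_tests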
00:07:05.266 19:08:49 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 63198 00:07:05.266 19:08:49 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:05.266 19:08:49 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 63198 ']' 00:07:05.266 19:08:49 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:05.266 19:08:49 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:05.266 19:08:49 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:05.266 19:08:49 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:05.266 19:08:49 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:05.266 [2024-12-16 19:08:49.521675] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:07:05.266 [2024-12-16 19:08:49.521790] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63198 ] 00:07:05.525 [2024-12-16 19:08:49.676944] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:05.525 [2024-12-16 19:08:49.791243] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:07:05.525 [2024-12-16 19:08:49.791355] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:07:05.525 [2024-12-16 19:08:49.791441] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.091 19:08:50 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:06.091 19:08:50 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:07:06.091 19:08:50 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:07:06.349 I/O targets: 00:07:06.349 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:07:06.349 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:07:06.349 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:07:06.349 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:06.349 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:06.349 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:06.349 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:07:06.349 00:07:06.349 00:07:06.349 CUnit - A unit testing framework for C - Version 2.1-3 00:07:06.349 http://cunit.sourceforge.net/ 00:07:06.349 00:07:06.349 00:07:06.349 Suite: bdevio tests on: Nvme3n1 00:07:06.349 Test: blockdev write read block ...passed 00:07:06.349 Test: blockdev write zeroes read block ...passed 00:07:06.349 Test: blockdev write zeroes read no split ...passed 00:07:06.349 Test: blockdev write zeroes read split ...passed 00:07:06.349 Test: blockdev write zeroes read split partial ...passed 00:07:06.349 Test: blockdev reset ...[2024-12-16 19:08:50.526833] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:07:06.349 passed 00:07:06.349 Test: blockdev write read 8 blocks ...[2024-12-16 19:08:50.529935] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] 
Resetting controller successful. 00:07:06.349 passed 00:07:06.349 Test: blockdev write read size > 128k ...passed 00:07:06.349 Test: blockdev write read invalid size ...passed 00:07:06.349 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:06.349 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:06.349 Test: blockdev write read max offset ...passed 00:07:06.349 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:06.349 Test: blockdev writev readv 8 blocks ...passed 00:07:06.349 Test: blockdev writev readv 30 x 1block ...passed 00:07:06.349 Test: blockdev writev readv block ...passed 00:07:06.349 Test: blockdev writev readv size > 128k ...passed 00:07:06.349 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:06.349 Test: blockdev comparev and writev ...[2024-12-16 19:08:50.537288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2bb404000 len:0x1000 00:07:06.349 [2024-12-16 19:08:50.537336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:06.349 passed 00:07:06.349 Test: blockdev nvme passthru rw ...passed 00:07:06.349 Test: blockdev nvme passthru vendor specific ...[2024-12-16 19:08:50.538217] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:06.349 [2024-12-16 19:08:50.538268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:06.349 passed 00:07:06.349 Test: blockdev nvme admin passthru ...passed 00:07:06.349 Test: blockdev copy ...passed 00:07:06.349 Suite: bdevio tests on: Nvme2n3 00:07:06.349 Test: blockdev write read block ...passed 00:07:06.349 Test: blockdev write zeroes read block ...passed 00:07:06.349 Test: blockdev write zeroes read no split ...passed 00:07:06.349 Test: blockdev write zeroes read split ...passed 00:07:06.349 Test: blockdev write zeroes read split partial ...passed 00:07:06.349 Test: blockdev reset ...[2024-12-16 19:08:50.593129] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:06.349 [2024-12-16 19:08:50.596493] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 00:07:06.349 passed 00:07:06.349 Test: blockdev write read 8 blocks ...
00:07:06.349 passed 00:07:06.349 Test: blockdev write read size > 128k ...passed 00:07:06.349 Test: blockdev write read invalid size ...passed 00:07:06.349 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:06.349 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:06.349 Test: blockdev write read max offset ...passed 00:07:06.349 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:06.349 Test: blockdev writev readv 8 blocks ...passed 00:07:06.349 Test: blockdev writev readv 30 x 1block ...passed 00:07:06.349 Test: blockdev writev readv block ...passed 00:07:06.349 Test: blockdev writev readv size > 128k ...passed 00:07:06.349 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:06.349 Test: blockdev comparev and writev ...[2024-12-16 19:08:50.603595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2bb402000 len:0x1000 00:07:06.349 [2024-12-16 19:08:50.603746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:06.349 passed 00:07:06.349 Test: blockdev nvme passthru rw ...passed 00:07:06.349 Test: blockdev nvme passthru vendor specific ...passed 00:07:06.349 Test: blockdev nvme admin passthru ...[2024-12-16 19:08:50.604823] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:06.349 [2024-12-16 19:08:50.604857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:06.349 passed 00:07:06.349 Test: blockdev copy ...passed 00:07:06.349 Suite: bdevio tests on: Nvme2n2 00:07:06.349 Test: blockdev write read block ...passed 00:07:06.349 Test: blockdev write zeroes read block ...passed 00:07:06.349 Test: blockdev write zeroes read no split ...passed 00:07:06.349 Test: blockdev write zeroes read split ...passed 00:07:06.349 Test: blockdev write zeroes read split partial ...passed 00:07:06.349 Test: blockdev reset ...[2024-12-16 19:08:50.660787] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:06.349 [2024-12-16 19:08:50.663754] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful.
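The COMPARE FAILURE (02/85) notices threaded through these suites are expected output, not errors: the comparev-and-writev case deliberately issues an NVMe COMPARE against data it has just overwritten, and passes when the controller returns the miscompare status (status code type 0x2, status code 0x85). A rough stand-alone analogue with nvme-cli against a kernel-visible namespace (illustrative only; the device path, flags, and pattern file are assumptions, not taken from this run):

# Compare one 4 KiB block at LBA 0 against a random pattern; on a
# mismatch the command completes with the same 02/85 status seen above.
dd if=/dev/urandom of=/tmp/pattern bs=4096 count=1
nvme compare /dev/nvme0n1 --start-block=0 --block-count=0 \
    --data-size=4096 --data=/tmp/pattern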
00:07:06.349 passed 00:07:06.349 Test: blockdev write read 8 blocks ...passed 00:07:06.349 Test: blockdev write read size > 128k ...passed 00:07:06.349 Test: blockdev write read invalid size ...passed 00:07:06.349 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:06.349 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:06.349 Test: blockdev write read max offset ...passed 00:07:06.349 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:06.349 Test: blockdev writev readv 8 blocks ...passed 00:07:06.349 Test: blockdev writev readv 30 x 1block ...passed 00:07:06.349 Test: blockdev writev readv block ...passed 00:07:06.349 Test: blockdev writev readv size > 128k ...passed 00:07:06.349 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:06.349 Test: blockdev comparev and writev ...[2024-12-16 19:08:50.671492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d5c38000 len:0x1000 00:07:06.349 [2024-12-16 19:08:50.671528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:06.349 passed 00:07:06.349 Test: blockdev nvme passthru rw ...passed 00:07:06.350 Test: blockdev nvme passthru vendor specific ...passed 00:07:06.350 Test: blockdev nvme admin passthru ...[2024-12-16 19:08:50.672158] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:06.350 [2024-12-16 19:08:50.672197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:06.350 passed 00:07:06.350 Test: blockdev copy ...passed 00:07:06.350 Suite: bdevio tests on: Nvme2n1 00:07:06.350 Test: blockdev write read block ...passed 00:07:06.350 Test: blockdev write zeroes read block ...passed 00:07:06.350 Test: blockdev write zeroes read no split ...passed 00:07:06.607 Test: blockdev write zeroes read split ...passed 00:07:06.607 Test: blockdev write zeroes read split partial ...passed 00:07:06.607 Test: blockdev reset ...[2024-12-16 19:08:50.731004] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:06.607 [2024-12-16 19:08:50.733926] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
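The Nvme1n1p1 and Nvme1n1p2 targets exercised in the next two suites are the GPT partitions that setup_gpt_conf carved out of the unlabelled namespace earlier in this test. Condensed from the trace above into a by-hand sketch (the type and unique GUIDs are the SPDK GPT GUIDs read out of module/bdev/gpt/gpt.h during this run; the device path is illustrative):

dev=/dev/nvme0n1   # the namespace parted reported as 'unrecognised disk label'
parted -s "$dev" mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100%
# Retype both partitions with SPDK's GPT GUIDs so the gpt vbdev module claims them:
sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 "$dev"
sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df "$dev"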
00:07:06.607 passed 00:07:06.607 Test: blockdev write read 8 blocks ...passed 00:07:06.607 Test: blockdev write read size > 128k ...passed 00:07:06.607 Test: blockdev write read invalid size ...passed 00:07:06.607 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:06.607 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:06.607 Test: blockdev write read max offset ...passed 00:07:06.607 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:06.607 Test: blockdev writev readv 8 blocks ...passed 00:07:06.607 Test: blockdev writev readv 30 x 1block ...passed 00:07:06.607 Test: blockdev writev readv block ...passed 00:07:06.607 Test: blockdev writev readv size > 128k ...passed 00:07:06.607 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:06.607 Test: blockdev comparev and writev ...[2024-12-16 19:08:50.741023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d5c34000 len:0x1000 00:07:06.607 [2024-12-16 19:08:50.741193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:06.607 passed 00:07:06.607 Test: blockdev nvme passthru rw ...passed 00:07:06.607 Test: blockdev nvme passthru vendor specific ...[2024-12-16 19:08:50.742385] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:06.607 [2024-12-16 19:08:50.742517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:06.607 passed 00:07:06.607 Test: blockdev nvme admin passthru ...passed 00:07:06.607 Test: blockdev copy ...passed 00:07:06.607 Suite: bdevio tests on: Nvme1n1p2 00:07:06.608 Test: blockdev write read block ...passed 00:07:06.608 Test: blockdev write zeroes read block ...passed 00:07:06.608 Test: blockdev write zeroes read no split ...passed 00:07:06.608 Test: blockdev write zeroes read split ...passed 00:07:06.608 Test: blockdev write zeroes read split partial ...passed 00:07:06.608 Test: blockdev reset ...[2024-12-16 19:08:50.800759] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:07:06.608 [2024-12-16 19:08:50.803377] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 00:07:06.608 passed 00:07:06.608 Test: blockdev write read 8 blocks ...
00:07:06.608 passed 00:07:06.608 Test: blockdev write read size > 128k ...passed 00:07:06.608 Test: blockdev write read invalid size ...passed 00:07:06.608 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:06.608 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:06.608 Test: blockdev write read max offset ...passed 00:07:06.608 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:06.608 Test: blockdev writev readv 8 blocks ...passed 00:07:06.608 Test: blockdev writev readv 30 x 1block ...passed 00:07:06.608 Test: blockdev writev readv block ...passed 00:07:06.608 Test: blockdev writev readv size > 128k ...passed 00:07:06.608 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:06.608 Test: blockdev comparev and writev ...[2024-12-16 19:08:50.809132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2d5c30000 len:0x1000 00:07:06.608 [2024-12-16 19:08:50.809191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:06.608 passed 00:07:06.608 Test: blockdev nvme passthru rw ...passed 00:07:06.608 Test: blockdev nvme passthru vendor specific ...passed 00:07:06.608 Test: blockdev nvme admin passthru ...passed 00:07:06.608 Test: blockdev copy ...passed 00:07:06.608 Suite: bdevio tests on: Nvme1n1p1 00:07:06.608 Test: blockdev write read block ...passed 00:07:06.608 Test: blockdev write zeroes read block ...passed 00:07:06.608 Test: blockdev write zeroes read no split ...passed 00:07:06.608 Test: blockdev write zeroes read split ...passed 00:07:06.608 Test: blockdev write zeroes read split partial ...passed 00:07:06.608 Test: blockdev reset ...[2024-12-16 19:08:50.851331] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:07:06.608 [2024-12-16 19:08:50.854233] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
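# [editor's note] Each bdevio suite finishes with "Test: blockdev reset", which
# drives the bdev_nvme reset path seen in the paired nvme_ctrlr_disconnect /
# bdev_nvme_reset_ctrlr_complete notices above. A sketch of triggering the same
# path by hand over JSON-RPC, assuming the controller was attached under the
# name "Nvme1" and the target listens on the default RPC socket:
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_reset_controller Nvme1
# On success the target should log the same "resetting controller" and
# "Resetting controller successful." pair recorded in this trace.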
00:07:06.608 passed 00:07:06.608 Test: blockdev write read 8 blocks ...passed 00:07:06.608 Test: blockdev write read size > 128k ...passed 00:07:06.608 Test: blockdev write read invalid size ...passed 00:07:06.608 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:06.608 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:06.608 Test: blockdev write read max offset ...passed 00:07:06.608 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:06.608 Test: blockdev writev readv 8 blocks ...passed 00:07:06.608 Test: blockdev writev readv 30 x 1block ...passed 00:07:06.608 Test: blockdev writev readv block ...passed 00:07:06.608 Test: blockdev writev readv size > 128k ...passed 00:07:06.608 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:06.608 Test: blockdev comparev and writev ...[2024-12-16 19:08:50.862047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2bbe0e000 len:0x1000 00:07:06.608 [2024-12-16 19:08:50.862208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:06.608 passed 00:07:06.608 Test: blockdev nvme passthru rw ...passed 00:07:06.608 Test: blockdev nvme passthru vendor specific ...passed 00:07:06.608 Test: blockdev nvme admin passthru ...passed 00:07:06.608 Test: blockdev copy ...passed 00:07:06.608 Suite: bdevio tests on: Nvme0n1 00:07:06.608 Test: blockdev write read block ...passed 00:07:06.608 Test: blockdev write zeroes read block ...passed 00:07:06.608 Test: blockdev write zeroes read no split ...passed 00:07:06.608 Test: blockdev write zeroes read split ...passed 00:07:06.608 Test: blockdev write zeroes read split partial ...passed 00:07:06.608 Test: blockdev reset ...[2024-12-16 19:08:50.905048] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:07:06.608 [2024-12-16 19:08:50.907767] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:07:06.608 passed 00:07:06.608 Test: blockdev write read 8 blocks ...passed 00:07:06.608 Test: blockdev write read size > 128k ...passed 00:07:06.608 Test: blockdev write read invalid size ...passed 00:07:06.608 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:06.608 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:06.608 Test: blockdev write read max offset ...passed 00:07:06.608 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:06.608 Test: blockdev writev readv 8 blocks ...passed 00:07:06.608 Test: blockdev writev readv 30 x 1block ...passed 00:07:06.608 Test: blockdev writev readv block ...passed 00:07:06.608 Test: blockdev writev readv size > 128k ...passed 00:07:06.608 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:06.608 Test: blockdev comparev and writev ...[2024-12-16 19:08:50.914476] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:07:06.608 separate metadata which is not supported yet.
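# [editor's note] The *ERROR* line above is informational: comparev_and_writev
# is skipped on Nvme0n1 because that namespace carries separate (non-interleaved)
# metadata, which bdevio's compare-and-write helper does not support yet. One
# way to check which bdevs carry metadata, assuming bdev_get_bdevs reports the
# md_size and md_interleave fields as in current SPDK releases:
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 \
    | jq '.[0] | {name, block_size, md_size, md_interleave}'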
00:07:06.608 passed 00:07:06.608 Test: blockdev nvme passthru rw ...passed 00:07:06.608 Test: blockdev nvme passthru vendor specific ...[2024-12-16 19:08:50.915055] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:07:06.608 [2024-12-16 19:08:50.915133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:07:06.608 passed 00:07:06.608 Test: blockdev nvme admin passthru ...passed 00:07:06.608 Test: blockdev copy ...passed 00:07:06.608 00:07:06.608 Run Summary: Type Total Ran Passed Failed Inactive 00:07:06.608 suites 7 7 n/a 0 0 00:07:06.608 tests 161 161 161 0 0 00:07:06.608 asserts 1025 1025 1025 0 n/a 00:07:06.608 00:07:06.608 Elapsed time = 1.169 seconds 00:07:06.608 0 00:07:06.608 19:08:50 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 63198 00:07:06.608 19:08:50 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 63198 ']' 00:07:06.608 19:08:50 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 63198 00:07:06.608 19:08:50 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:07:06.608 19:08:50 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:06.608 19:08:50 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63198 00:07:06.868 19:08:50 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:06.868 19:08:50 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:06.868 19:08:50 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63198' 00:07:06.868 killing process with pid 63198 00:07:06.868 19:08:50 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@973 -- # kill 63198 00:07:06.868 19:08:50 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@978 -- # wait 63198 00:07:07.485 19:08:51 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:07:07.485 00:07:07.485 real 0m2.206s 00:07:07.485 user 0m5.629s 00:07:07.485 sys 0m0.296s 00:07:07.485 19:08:51 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:07.485 19:08:51 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:07.485 ************************************ 00:07:07.485 END TEST bdev_bounds 00:07:07.485 ************************************ 00:07:07.485 19:08:51 blockdev_nvme_gpt -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:07.485 19:08:51 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:07.485 19:08:51 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:07.485 19:08:51 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:07.485 ************************************ 00:07:07.485 START TEST bdev_nbd 00:07:07.485 ************************************ 00:07:07.485 19:08:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:07.485 19:08:51 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:07:07.485 Waiting for
process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:07.485 19:08:51 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:07:07.485 19:08:51 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:07.485 19:08:51 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:07.485 19:08:51 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:07.485 19:08:51 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:07:07.485 19:08:51 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:07:07.485 19:08:51 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:07:07.485 19:08:51 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:07:07.485 19:08:51 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:07:07.485 19:08:51 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:07:07.485 19:08:51 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:07.485 19:08:51 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:07:07.485 19:08:51 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:07.485 19:08:51 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:07:07.485 19:08:51 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=63252 00:07:07.485 19:08:51 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:07:07.485 19:08:51 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 63252 /var/tmp/spdk-nbd.sock 00:07:07.485 19:08:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 63252 ']' 00:07:07.485 19:08:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:07.485 19:08:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:07.485 19:08:51 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:07.485 19:08:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:07.485 19:08:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:07.485 19:08:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:07.485 [2024-12-16 19:08:51.767265] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
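# [editor's note] This stretch of the trace is nbd_function_test: a bdev_svc
# app was just started on its own RPC socket, each bdev is exported as a kernel
# /dev/nbd* node, every node is verified with a direct-I/O read, and the
# exports are torn down again. A condensed sketch of that loop, reconstructed
# from the xtrace below (the rpc.py calls and bdev names are the ones actually
# used; waitfornbd is paraphrased, and the retry delay is assumed, not shown):
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"

waitfornbd() {                        # poll until the kernel registers the device
    local nbd_name=$1 i
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1                     # assumed delay between polls
    done
    # a single direct-I/O read proves the export actually serves data
    dd if=/dev/$nbd_name of=/tmp/nbdtest bs=4096 count=1 iflag=direct
    [[ $(stat -c %s /tmp/nbdtest) != 0 ]] && rm -f /tmp/nbdtest && return 0
    return 1
}

for bdev in Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1; do
    dev=$($rpc nbd_start_disk "$bdev")    # target picks the next free /dev/nbd*
    waitfornbd "$(basename "$dev")"
done
$rpc nbd_get_disks                        # JSON map of nbd_device -> bdev_name
for dev in $($rpc nbd_get_disks | jq -r '.[] | .nbd_device'); do
    $rpc nbd_stop_disk "$dev"             # waitfornbd_exit then polls /proc/partitions
done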
00:07:07.485 [2024-12-16 19:08:51.767364] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:07.744 [2024-12-16 19:08:51.918627] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.744 [2024-12-16 19:08:52.019841] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.311 19:08:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:08.311 19:08:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:07:08.311 19:08:52 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:08.311 19:08:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:08.311 19:08:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:08.311 19:08:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:07:08.311 19:08:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:08.311 19:08:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:08.311 19:08:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:08.311 19:08:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:07:08.311 19:08:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:07:08.311 19:08:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:07:08.311 19:08:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:07:08.311 19:08:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:08.311 19:08:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:07:08.570 19:08:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:07:08.570 19:08:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:07:08.570 19:08:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:07:08.570 19:08:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:08.570 19:08:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:08.570 19:08:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:08.570 19:08:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:08.570 19:08:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:08.570 19:08:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:08.570 19:08:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:08.570 19:08:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:08.570 19:08:52 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:08.570 1+0 records in 00:07:08.570 1+0 records out 00:07:08.570 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000245423 s, 16.7 MB/s 00:07:08.570 19:08:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:08.570 19:08:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:08.570 19:08:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:08.570 19:08:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:08.570 19:08:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:08.570 19:08:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:08.570 19:08:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:08.570 19:08:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:07:08.827 19:08:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:07:08.827 19:08:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:07:08.827 19:08:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:07:08.827 19:08:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:08.827 19:08:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:08.827 19:08:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:08.828 19:08:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:08.828 19:08:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:08.828 19:08:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:08.828 19:08:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:08.828 19:08:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:08.828 19:08:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:08.828 1+0 records in 00:07:08.828 1+0 records out 00:07:08.828 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000302944 s, 13.5 MB/s 00:07:08.828 19:08:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:08.828 19:08:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:08.828 19:08:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:08.828 19:08:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:08.828 19:08:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:08.828 19:08:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:08.828 19:08:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:08.828 19:08:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:07:09.086 19:08:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:07:09.086 19:08:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:07:09.086 19:08:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:07:09.086 19:08:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:07:09.086 19:08:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:09.086 19:08:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:09.086 19:08:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:09.086 19:08:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:07:09.086 19:08:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:09.086 19:08:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:09.086 19:08:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:09.086 19:08:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:09.086 1+0 records in 00:07:09.086 1+0 records out 00:07:09.086 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000344906 s, 11.9 MB/s 00:07:09.086 19:08:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:09.086 19:08:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:09.086 19:08:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:09.086 19:08:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:09.086 19:08:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:09.086 19:08:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:09.086 19:08:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:09.086 19:08:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:07:09.344 19:08:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:07:09.344 19:08:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:07:09.344 19:08:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:07:09.344 19:08:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:07:09.344 19:08:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:09.344 19:08:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:09.344 19:08:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:09.344 19:08:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:07:09.344 19:08:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:09.344 19:08:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:09.344 19:08:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:09.344 19:08:53 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:09.344 1+0 records in 00:07:09.344 1+0 records out 00:07:09.344 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000331978 s, 12.3 MB/s 00:07:09.345 19:08:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:09.345 19:08:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:09.345 19:08:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:09.345 19:08:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:09.345 19:08:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:09.345 19:08:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:09.345 19:08:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:09.345 19:08:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:07:09.603 19:08:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:07:09.603 19:08:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:07:09.603 19:08:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:07:09.603 19:08:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:07:09.603 19:08:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:09.603 19:08:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:09.603 19:08:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:09.603 19:08:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:07:09.603 19:08:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:09.603 19:08:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:09.603 19:08:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:09.603 19:08:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:09.603 1+0 records in 00:07:09.603 1+0 records out 00:07:09.603 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000601442 s, 6.8 MB/s 00:07:09.603 19:08:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:09.603 19:08:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:09.603 19:08:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:09.603 19:08:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:09.603 19:08:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:09.603 19:08:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:09.603 19:08:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:09.603 19:08:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme2n3 00:07:09.862 19:08:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:07:09.862 19:08:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:07:09.862 19:08:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:07:09.862 19:08:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:07:09.862 19:08:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:09.862 19:08:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:09.862 19:08:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:09.862 19:08:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:07:09.862 19:08:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:09.862 19:08:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:09.862 19:08:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:09.862 19:08:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:09.862 1+0 records in 00:07:09.862 1+0 records out 00:07:09.862 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000454271 s, 9.0 MB/s 00:07:09.862 19:08:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:09.862 19:08:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:09.862 19:08:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:09.862 19:08:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:09.862 19:08:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:09.862 19:08:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:09.862 19:08:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:09.862 19:08:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:07:10.120 19:08:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:07:10.120 19:08:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:07:10.120 19:08:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:07:10.120 19:08:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd6 00:07:10.120 19:08:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:10.120 19:08:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:10.120 19:08:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:10.120 19:08:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd6 /proc/partitions 00:07:10.120 19:08:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:10.120 19:08:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:10.120 19:08:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:10.120 19:08:54 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:10.120 1+0 records in 00:07:10.120 1+0 records out 00:07:10.120 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000467148 s, 8.8 MB/s 00:07:10.120 19:08:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:10.120 19:08:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:10.120 19:08:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:10.120 19:08:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:10.120 19:08:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:10.120 19:08:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:10.120 19:08:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:10.120 19:08:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:10.378 19:08:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:07:10.378 { 00:07:10.378 "nbd_device": "/dev/nbd0", 00:07:10.378 "bdev_name": "Nvme0n1" 00:07:10.378 }, 00:07:10.378 { 00:07:10.378 "nbd_device": "/dev/nbd1", 00:07:10.378 "bdev_name": "Nvme1n1p1" 00:07:10.378 }, 00:07:10.378 { 00:07:10.378 "nbd_device": "/dev/nbd2", 00:07:10.378 "bdev_name": "Nvme1n1p2" 00:07:10.378 }, 00:07:10.378 { 00:07:10.378 "nbd_device": "/dev/nbd3", 00:07:10.378 "bdev_name": "Nvme2n1" 00:07:10.378 }, 00:07:10.378 { 00:07:10.378 "nbd_device": "/dev/nbd4", 00:07:10.378 "bdev_name": "Nvme2n2" 00:07:10.378 }, 00:07:10.378 { 00:07:10.378 "nbd_device": "/dev/nbd5", 00:07:10.378 "bdev_name": "Nvme2n3" 00:07:10.378 }, 00:07:10.378 { 00:07:10.378 "nbd_device": "/dev/nbd6", 00:07:10.378 "bdev_name": "Nvme3n1" 00:07:10.378 } 00:07:10.378 ]' 00:07:10.378 19:08:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:07:10.378 19:08:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:07:10.378 { 00:07:10.378 "nbd_device": "/dev/nbd0", 00:07:10.378 "bdev_name": "Nvme0n1" 00:07:10.378 }, 00:07:10.378 { 00:07:10.378 "nbd_device": "/dev/nbd1", 00:07:10.378 "bdev_name": "Nvme1n1p1" 00:07:10.378 }, 00:07:10.378 { 00:07:10.378 "nbd_device": "/dev/nbd2", 00:07:10.378 "bdev_name": "Nvme1n1p2" 00:07:10.378 }, 00:07:10.378 { 00:07:10.378 "nbd_device": "/dev/nbd3", 00:07:10.378 "bdev_name": "Nvme2n1" 00:07:10.378 }, 00:07:10.378 { 00:07:10.378 "nbd_device": "/dev/nbd4", 00:07:10.378 "bdev_name": "Nvme2n2" 00:07:10.378 }, 00:07:10.379 { 00:07:10.379 "nbd_device": "/dev/nbd5", 00:07:10.379 "bdev_name": "Nvme2n3" 00:07:10.379 }, 00:07:10.379 { 00:07:10.379 "nbd_device": "/dev/nbd6", 00:07:10.379 "bdev_name": "Nvme3n1" 00:07:10.379 } 00:07:10.379 ]' 00:07:10.379 19:08:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:07:10.379 19:08:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:07:10.379 19:08:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:10.379 19:08:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 
-- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:07:10.379 19:08:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:10.379 19:08:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:10.379 19:08:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:10.379 19:08:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:10.637 19:08:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:10.637 19:08:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:10.637 19:08:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:10.637 19:08:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:10.637 19:08:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:10.637 19:08:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:10.637 19:08:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:10.637 19:08:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:10.637 19:08:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:10.637 19:08:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:10.637 19:08:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:10.637 19:08:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:10.637 19:08:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:10.637 19:08:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:10.637 19:08:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:10.637 19:08:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:10.637 19:08:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:10.637 19:08:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:10.637 19:08:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:10.637 19:08:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:07:10.894 19:08:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:07:10.894 19:08:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:07:10.894 19:08:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:07:10.894 19:08:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:10.894 19:08:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:10.894 19:08:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:07:10.894 19:08:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:10.894 19:08:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:10.894 19:08:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:10.894 19:08:55 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:07:11.152 19:08:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:07:11.152 19:08:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:07:11.152 19:08:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:07:11.152 19:08:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:11.152 19:08:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:11.152 19:08:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:07:11.152 19:08:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:11.152 19:08:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:11.152 19:08:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:11.152 19:08:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:07:11.410 19:08:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:07:11.410 19:08:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:07:11.410 19:08:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:07:11.410 19:08:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:11.410 19:08:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:11.410 19:08:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:07:11.410 19:08:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:11.410 19:08:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:11.410 19:08:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:11.410 19:08:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:07:11.668 19:08:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:07:11.668 19:08:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:07:11.668 19:08:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:07:11.668 19:08:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:11.668 19:08:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:11.668 19:08:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:07:11.668 19:08:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:11.668 19:08:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:11.668 19:08:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:11.668 19:08:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:07:11.927 19:08:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:07:11.927 19:08:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:07:11.927 19:08:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd6 00:07:11.927 19:08:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:11.927 19:08:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:11.927 19:08:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:07:11.927 19:08:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:11.927 19:08:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:11.927 19:08:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:11.927 19:08:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:11.927 19:08:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:11.927 19:08:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:11.927 19:08:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:11.927 19:08:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:12.185 19:08:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:12.185 19:08:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:12.185 19:08:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:12.185 19:08:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:12.185 19:08:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:12.185 19:08:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:12.185 19:08:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:07:12.185 19:08:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:07:12.185 19:08:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:07:12.185 19:08:56 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:12.185 19:08:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:12.185 19:08:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:12.185 19:08:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:12.185 19:08:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:12.185 19:08:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:12.185 19:08:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:12.185 19:08:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:12.185 19:08:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:12.185 19:08:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:12.185 
19:08:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:12.185 19:08:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:12.185 19:08:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:07:12.185 19:08:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:12.185 19:08:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:12.185 19:08:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:07:12.185 /dev/nbd0 00:07:12.185 19:08:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:12.185 19:08:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:12.185 19:08:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:12.185 19:08:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:12.185 19:08:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:12.185 19:08:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:12.185 19:08:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:12.185 19:08:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:12.185 19:08:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:12.185 19:08:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:12.186 19:08:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:12.186 1+0 records in 00:07:12.186 1+0 records out 00:07:12.186 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000244657 s, 16.7 MB/s 00:07:12.186 19:08:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:12.186 19:08:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:12.186 19:08:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:12.186 19:08:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:12.186 19:08:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:12.186 19:08:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:12.186 19:08:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:12.186 19:08:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:07:12.444 /dev/nbd1 00:07:12.444 19:08:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:12.444 19:08:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:12.444 19:08:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:12.444 19:08:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:12.444 19:08:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:12.444 19:08:56 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:12.444 19:08:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:12.444 19:08:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:12.444 19:08:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:12.444 19:08:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:12.444 19:08:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:12.444 1+0 records in 00:07:12.444 1+0 records out 00:07:12.444 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00072168 s, 5.7 MB/s 00:07:12.444 19:08:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:12.444 19:08:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:12.444 19:08:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:12.444 19:08:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:12.444 19:08:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:12.444 19:08:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:12.444 19:08:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:12.444 19:08:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:07:12.702 /dev/nbd10 00:07:12.702 19:08:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:07:12.702 19:08:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:07:12.702 19:08:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:07:12.702 19:08:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:12.702 19:08:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:12.702 19:08:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:12.702 19:08:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:07:12.703 19:08:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:12.703 19:08:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:12.703 19:08:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:12.703 19:08:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:12.703 1+0 records in 00:07:12.703 1+0 records out 00:07:12.703 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000488749 s, 8.4 MB/s 00:07:12.703 19:08:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:12.703 19:08:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:12.703 19:08:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:12.703 19:08:56 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:12.703 19:08:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:12.703 19:08:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:12.703 19:08:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:12.703 19:08:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:07:12.967 /dev/nbd11 00:07:12.967 19:08:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:07:12.967 19:08:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:07:12.967 19:08:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:07:12.967 19:08:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:12.967 19:08:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:12.967 19:08:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:12.967 19:08:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:07:12.967 19:08:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:12.967 19:08:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:12.967 19:08:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:12.967 19:08:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:12.968 1+0 records in 00:07:12.968 1+0 records out 00:07:12.968 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000798886 s, 5.1 MB/s 00:07:12.968 19:08:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:12.968 19:08:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:12.968 19:08:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:12.968 19:08:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:12.968 19:08:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:12.968 19:08:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:12.968 19:08:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:12.968 19:08:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:07:13.230 /dev/nbd12 00:07:13.230 19:08:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:07:13.230 19:08:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:07:13.230 19:08:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:07:13.231 19:08:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:13.231 19:08:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:13.231 19:08:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:13.231 19:08:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 
00:07:13.231 19:08:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:13.231 19:08:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:13.231 19:08:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:13.231 19:08:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:13.231 1+0 records in 00:07:13.231 1+0 records out 00:07:13.231 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000345384 s, 11.9 MB/s 00:07:13.231 19:08:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:13.231 19:08:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:13.231 19:08:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:13.231 19:08:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:13.231 19:08:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:13.231 19:08:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:13.231 19:08:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:13.231 19:08:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:07:13.489 /dev/nbd13 00:07:13.489 19:08:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:07:13.489 19:08:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:07:13.489 19:08:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:07:13.489 19:08:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:13.489 19:08:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:13.489 19:08:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:13.489 19:08:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:07:13.489 19:08:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:13.489 19:08:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:13.489 19:08:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:13.489 19:08:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:13.489 1+0 records in 00:07:13.489 1+0 records out 00:07:13.489 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000424873 s, 9.6 MB/s 00:07:13.489 19:08:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:13.489 19:08:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:13.489 19:08:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:13.489 19:08:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:13.489 19:08:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:13.489 19:08:57 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:13.489 19:08:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:13.489 19:08:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:07:13.489 /dev/nbd14 00:07:13.489 19:08:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:07:13.489 19:08:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:07:13.489 19:08:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd14 00:07:13.489 19:08:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:13.489 19:08:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:13.489 19:08:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:13.489 19:08:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd14 /proc/partitions 00:07:13.489 19:08:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:13.489 19:08:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:13.489 19:08:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:13.489 19:08:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:13.489 1+0 records in 00:07:13.489 1+0 records out 00:07:13.489 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000558541 s, 7.3 MB/s 00:07:13.489 19:08:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:13.489 19:08:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:13.489 19:08:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:13.489 19:08:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:13.489 19:08:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:13.489 19:08:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:13.489 19:08:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:13.489 19:08:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:13.489 19:08:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:13.489 19:08:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:13.748 19:08:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:13.748 { 00:07:13.748 "nbd_device": "/dev/nbd0", 00:07:13.748 "bdev_name": "Nvme0n1" 00:07:13.748 }, 00:07:13.748 { 00:07:13.748 "nbd_device": "/dev/nbd1", 00:07:13.748 "bdev_name": "Nvme1n1p1" 00:07:13.748 }, 00:07:13.748 { 00:07:13.748 "nbd_device": "/dev/nbd10", 00:07:13.748 "bdev_name": "Nvme1n1p2" 00:07:13.748 }, 00:07:13.748 { 00:07:13.748 "nbd_device": "/dev/nbd11", 00:07:13.748 "bdev_name": "Nvme2n1" 00:07:13.748 }, 00:07:13.748 { 00:07:13.748 "nbd_device": "/dev/nbd12", 00:07:13.748 "bdev_name": "Nvme2n2" 00:07:13.748 }, 00:07:13.748 { 00:07:13.748 "nbd_device": "/dev/nbd13", 00:07:13.748 "bdev_name": "Nvme2n3" 
00:07:13.748 }, 00:07:13.748 { 00:07:13.748 "nbd_device": "/dev/nbd14", 00:07:13.748 "bdev_name": "Nvme3n1" 00:07:13.748 } 00:07:13.748 ]' 00:07:13.748 19:08:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:13.748 { 00:07:13.748 "nbd_device": "/dev/nbd0", 00:07:13.748 "bdev_name": "Nvme0n1" 00:07:13.748 }, 00:07:13.748 { 00:07:13.748 "nbd_device": "/dev/nbd1", 00:07:13.748 "bdev_name": "Nvme1n1p1" 00:07:13.748 }, 00:07:13.748 { 00:07:13.748 "nbd_device": "/dev/nbd10", 00:07:13.748 "bdev_name": "Nvme1n1p2" 00:07:13.748 }, 00:07:13.748 { 00:07:13.748 "nbd_device": "/dev/nbd11", 00:07:13.748 "bdev_name": "Nvme2n1" 00:07:13.748 }, 00:07:13.748 { 00:07:13.748 "nbd_device": "/dev/nbd12", 00:07:13.748 "bdev_name": "Nvme2n2" 00:07:13.748 }, 00:07:13.748 { 00:07:13.748 "nbd_device": "/dev/nbd13", 00:07:13.748 "bdev_name": "Nvme2n3" 00:07:13.748 }, 00:07:13.748 { 00:07:13.748 "nbd_device": "/dev/nbd14", 00:07:13.748 "bdev_name": "Nvme3n1" 00:07:13.748 } 00:07:13.748 ]' 00:07:13.748 19:08:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:13.748 19:08:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:13.748 /dev/nbd1 00:07:13.748 /dev/nbd10 00:07:13.748 /dev/nbd11 00:07:13.748 /dev/nbd12 00:07:13.748 /dev/nbd13 00:07:13.748 /dev/nbd14' 00:07:13.748 19:08:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:13.748 /dev/nbd1 00:07:13.748 /dev/nbd10 00:07:13.748 /dev/nbd11 00:07:13.748 /dev/nbd12 00:07:13.748 /dev/nbd13 00:07:13.748 /dev/nbd14' 00:07:13.748 19:08:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:13.748 19:08:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:07:13.748 19:08:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:07:13.748 19:08:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:07:13.748 19:08:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:07:13.748 19:08:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:07:13.748 19:08:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:13.748 19:08:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:13.748 19:08:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:13.748 19:08:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:13.748 19:08:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:13.748 19:08:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:07:13.748 256+0 records in 00:07:13.748 256+0 records out 00:07:13.748 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00731245 s, 143 MB/s 00:07:13.748 19:08:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:13.748 19:08:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:14.008 256+0 records in 00:07:14.008 256+0 records out 00:07:14.008 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.0850023 s, 12.3 MB/s 00:07:14.008 19:08:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:14.008 19:08:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:14.008 256+0 records in 00:07:14.008 256+0 records out 00:07:14.008 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0941987 s, 11.1 MB/s 00:07:14.008 19:08:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:14.008 19:08:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:07:14.268 256+0 records in 00:07:14.268 256+0 records out 00:07:14.268 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0904679 s, 11.6 MB/s 00:07:14.268 19:08:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:14.268 19:08:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:07:14.268 256+0 records in 00:07:14.268 256+0 records out 00:07:14.268 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0872692 s, 12.0 MB/s 00:07:14.268 19:08:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:14.268 19:08:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:07:14.268 256+0 records in 00:07:14.268 256+0 records out 00:07:14.268 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.091398 s, 11.5 MB/s 00:07:14.268 19:08:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:14.268 19:08:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:07:14.526 256+0 records in 00:07:14.526 256+0 records out 00:07:14.526 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0881901 s, 11.9 MB/s 00:07:14.526 19:08:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:14.526 19:08:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:07:14.526 256+0 records in 00:07:14.526 256+0 records out 00:07:14.526 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0881632 s, 11.9 MB/s 00:07:14.526 19:08:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:07:14.526 19:08:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:14.526 19:08:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:14.526 19:08:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:14.526 19:08:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:14.526 19:08:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:14.526 19:08:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:14.526 19:08:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in 
"${nbd_list[@]}" 00:07:14.526 19:08:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:07:14.526 19:08:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:14.526 19:08:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:07:14.526 19:08:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:14.526 19:08:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:07:14.526 19:08:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:14.527 19:08:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:07:14.527 19:08:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:14.527 19:08:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:07:14.527 19:08:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:14.527 19:08:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:07:14.527 19:08:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:14.527 19:08:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:07:14.527 19:08:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:14.527 19:08:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:14.527 19:08:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:14.527 19:08:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:14.527 19:08:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:14.527 19:08:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:14.527 19:08:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:14.527 19:08:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:14.787 19:08:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:14.787 19:08:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:14.787 19:08:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:14.787 19:08:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:14.787 19:08:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:14.787 19:08:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:14.787 19:08:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:14.787 19:08:59 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:07:14.787 19:08:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:14.787 19:08:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:15.045 19:08:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:15.045 19:08:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:15.045 19:08:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:15.045 19:08:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:15.046 19:08:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:15.046 19:08:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:15.046 19:08:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:15.046 19:08:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:15.046 19:08:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:15.046 19:08:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:07:15.303 19:08:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:07:15.303 19:08:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:07:15.303 19:08:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:07:15.303 19:08:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:15.303 19:08:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:15.304 19:08:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:07:15.304 19:08:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:15.304 19:08:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:15.304 19:08:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:15.304 19:08:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:07:15.304 19:08:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:07:15.304 19:08:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:07:15.304 19:08:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:07:15.304 19:08:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:15.304 19:08:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:15.304 19:08:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:07:15.304 19:08:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:15.304 19:08:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:15.304 19:08:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:15.304 19:08:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:07:15.561 19:08:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd12 00:07:15.561 19:08:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:07:15.561 19:08:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:07:15.561 19:08:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:15.562 19:08:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:15.562 19:08:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:07:15.562 19:08:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:15.562 19:08:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:15.562 19:08:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:15.562 19:08:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:07:15.822 19:09:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:07:15.822 19:09:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:07:15.822 19:09:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:07:15.822 19:09:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:15.822 19:09:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:15.822 19:09:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:07:15.822 19:09:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:15.822 19:09:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:15.822 19:09:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:15.822 19:09:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:07:16.081 19:09:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:07:16.081 19:09:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:07:16.081 19:09:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:07:16.081 19:09:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:16.081 19:09:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:16.081 19:09:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:07:16.081 19:09:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:16.081 19:09:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:16.081 19:09:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:16.081 19:09:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:16.081 19:09:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:16.341 19:09:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:16.341 19:09:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:16.341 19:09:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:16.341 19:09:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name= 00:07:16.341 19:09:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:16.341 19:09:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:16.341 19:09:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:16.341 19:09:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:16.341 19:09:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:16.341 19:09:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:07:16.341 19:09:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:16.341 19:09:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:07:16.341 19:09:00 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:16.341 19:09:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:16.341 19:09:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:07:16.341 19:09:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:07:16.602 malloc_lvol_verify 00:07:16.602 19:09:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:07:16.862 b0b621c3-52bb-402e-bda2-45aef4ba0f22 00:07:16.862 19:09:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:07:16.862 25630447-0a81-4fc6-a9c9-345ea4b708a7 00:07:16.862 19:09:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:07:17.126 /dev/nbd0 00:07:17.126 19:09:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:07:17.126 19:09:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:07:17.126 19:09:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:07:17.126 19:09:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:07:17.126 19:09:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:07:17.126 mke2fs 1.47.0 (5-Feb-2023) 00:07:17.126 Discarding device blocks: 0/4096 done 00:07:17.126 Creating filesystem with 4096 1k blocks and 1024 inodes 00:07:17.126 00:07:17.126 Allocating group tables: 0/1 done 00:07:17.126 Writing inode tables: 0/1 done 00:07:17.126 Creating journal (1024 blocks): done 00:07:17.126 Writing superblocks and filesystem accounting information: 0/1 done 00:07:17.126 00:07:17.126 19:09:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:17.126 19:09:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:17.126 19:09:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:17.126 19:09:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:17.126 19:09:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:17.126 19:09:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:07:17.126 19:09:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:17.385 19:09:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:17.385 19:09:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:17.385 19:09:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:17.385 19:09:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:17.385 19:09:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:17.385 19:09:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:17.385 19:09:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:17.385 19:09:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:17.385 19:09:01 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 63252 00:07:17.385 19:09:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 63252 ']' 00:07:17.385 19:09:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 63252 00:07:17.385 19:09:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:07:17.385 19:09:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:17.385 19:09:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63252 00:07:17.385 19:09:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:17.385 killing process with pid 63252 00:07:17.385 19:09:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:17.385 19:09:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63252' 00:07:17.385 19:09:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@973 -- # kill 63252 00:07:17.385 19:09:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@978 -- # wait 63252 00:07:18.320 19:09:02 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:07:18.320 00:07:18.320 real 0m10.754s 00:07:18.320 user 0m15.284s 00:07:18.320 sys 0m3.539s 00:07:18.320 ************************************ 00:07:18.320 END TEST bdev_nbd 00:07:18.320 ************************************ 00:07:18.320 19:09:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:18.320 19:09:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:18.320 19:09:02 blockdev_nvme_gpt -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:07:18.320 19:09:02 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = nvme ']' 00:07:18.320 19:09:02 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = gpt ']' 00:07:18.320 skipping fio tests on NVMe due to multi-ns failures. 00:07:18.320 19:09:02 blockdev_nvme_gpt -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
00:07:18.320 19:09:02 blockdev_nvme_gpt -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:18.320 19:09:02 blockdev_nvme_gpt -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:18.320 19:09:02 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:07:18.320 19:09:02 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:18.320 19:09:02 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:18.320 ************************************ 00:07:18.320 START TEST bdev_verify 00:07:18.320 ************************************ 00:07:18.320 19:09:02 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:18.320 [2024-12-16 19:09:02.565965] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:07:18.320 [2024-12-16 19:09:02.566087] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63663 ] 00:07:18.581 [2024-12-16 19:09:02.726433] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:18.581 [2024-12-16 19:09:02.844787] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:07:18.581 [2024-12-16 19:09:02.844942] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.152 Running I/O for 5 seconds... 
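Before the verify numbers start streaming: the bdev_nbd data pass that just finished is a plain write-then-compare. Condensed from the dd/cmp sequence traced earlier (device list and temp path as in the log; note that 256 blocks of 4096 bytes is exactly the 1M window cmp re-reads):

    # write one 1 MiB random buffer through every NBD device, then read each back
    dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256
    for nbd in /dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14; do
        dd if=/tmp/nbdrandtest of="$nbd" bs=4096 count=256 oflag=direct
    done
    for nbd in /dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14; do
        cmp -b -n 1M /tmp/nbdrandtest "$nbd"   # -b prints differing bytes, -n caps the comparison at 1 MiB
    done
    rm /tmp/nbdrandtest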
00:07:21.475 20800.00 IOPS, 81.25 MiB/s [2024-12-16T19:09:06.772Z] 21120.00 IOPS, 82.50 MiB/s [2024-12-16T19:09:07.727Z] 21760.00 IOPS, 85.00 MiB/s [2024-12-16T19:09:08.672Z] 21776.00 IOPS, 85.06 MiB/s [2024-12-16T19:09:08.672Z] 21721.60 IOPS, 84.85 MiB/s 00:07:24.318 Latency(us) 00:07:24.318 [2024-12-16T19:09:08.672Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:24.318 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:24.318 Verification LBA range: start 0x0 length 0xbd0bd 00:07:24.318 Nvme0n1 : 5.05 1521.98 5.95 0.00 0.00 83768.61 18551.73 88322.36 00:07:24.318 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:24.318 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:07:24.318 Nvme0n1 : 5.05 1520.66 5.94 0.00 0.00 83826.29 17946.78 90742.15 00:07:24.318 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:24.318 Verification LBA range: start 0x0 length 0x4ff80 00:07:24.318 Nvme1n1p1 : 5.08 1525.86 5.96 0.00 0.00 83366.69 12502.25 80256.39 00:07:24.318 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:24.318 Verification LBA range: start 0x4ff80 length 0x4ff80 00:07:24.318 Nvme1n1p1 : 5.05 1520.18 5.94 0.00 0.00 83666.65 20064.10 80256.39 00:07:24.318 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:24.318 Verification LBA range: start 0x0 length 0x4ff7f 00:07:24.319 Nvme1n1p2 : 5.08 1525.42 5.96 0.00 0.00 83210.39 12502.25 71383.83 00:07:24.319 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:24.319 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:07:24.319 Nvme1n1p2 : 5.08 1525.19 5.96 0.00 0.00 83108.53 7561.85 72593.72 00:07:24.319 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:24.319 Verification LBA range: start 0x0 length 0x80000 00:07:24.319 Nvme2n1 : 5.08 1524.93 5.96 0.00 0.00 83092.95 12048.54 67350.84 00:07:24.319 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:24.319 Verification LBA range: start 0x80000 length 0x80000 00:07:24.319 Nvme2n1 : 5.09 1533.53 5.99 0.00 0.00 82598.60 10132.87 66544.25 00:07:24.319 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:24.319 Verification LBA range: start 0x0 length 0x80000 00:07:24.319 Nvme2n2 : 5.09 1532.89 5.99 0.00 0.00 82727.64 11393.18 67350.84 00:07:24.319 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:24.319 Verification LBA range: start 0x80000 length 0x80000 00:07:24.319 Nvme2n2 : 5.09 1533.09 5.99 0.00 0.00 82404.92 9779.99 68964.04 00:07:24.319 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:24.319 Verification LBA range: start 0x0 length 0x80000 00:07:24.319 Nvme2n3 : 5.10 1532.17 5.99 0.00 0.00 82590.89 12199.78 69770.63 00:07:24.319 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:24.319 Verification LBA range: start 0x80000 length 0x80000 00:07:24.319 Nvme2n3 : 5.09 1532.51 5.99 0.00 0.00 82289.01 10485.76 70577.23 00:07:24.319 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:24.319 Verification LBA range: start 0x0 length 0x20000 00:07:24.319 Nvme3n1 : 5.10 1531.46 5.98 0.00 0.00 82456.66 12905.55 73803.62 00:07:24.319 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:24.319 Verification LBA range: start 0x20000 length 0x20000 00:07:24.319 Nvme3n1 
: 5.10 1531.74 5.98 0.00 0.00 82251.05 11090.71 72593.72 00:07:24.319 [2024-12-16T19:09:08.673Z] =================================================================================================================== 00:07:24.319 [2024-12-16T19:09:08.673Z] Total : 21391.60 83.56 0.00 0.00 82950.81 7561.85 90742.15 00:07:25.706 00:07:25.706 real 0m7.295s 00:07:25.706 user 0m13.629s 00:07:25.706 sys 0m0.231s 00:07:25.706 19:09:09 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:25.706 19:09:09 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:07:25.706 ************************************ 00:07:25.706 END TEST bdev_verify 00:07:25.706 ************************************ 00:07:25.706 19:09:09 blockdev_nvme_gpt -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:25.706 19:09:09 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:07:25.706 19:09:09 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:25.706 19:09:09 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:25.706 ************************************ 00:07:25.706 START TEST bdev_verify_big_io 00:07:25.706 ************************************ 00:07:25.706 19:09:09 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:25.706 [2024-12-16 19:09:09.912151] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:07:25.706 [2024-12-16 19:09:09.912287] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63761 ] 00:07:25.967 [2024-12-16 19:09:10.074367] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:25.967 [2024-12-16 19:09:10.191509] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:07:25.967 [2024-12-16 19:09:10.191575] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.909 Running I/O for 5 seconds... 
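A quick cross-check on the summary table above: the MiB/s column is just IOPS times the 4096-byte I/O size. For the Total row:

    echo '21391.60 * 4096 / 1048576' | bc -l   # 83.5609..., matching the 83.56 MiB/s reported

The same identity holds for every per-bdev row, which is a cheap way to spot a garbled line when reading these tables.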
00:07:32.536 1994.00 IOPS, 124.62 MiB/s [2024-12-16T19:09:17.823Z] 2711.00 IOPS, 169.44 MiB/s [2024-12-16T19:09:18.081Z] 3379.67 IOPS, 211.23 MiB/s 00:07:33.727 Latency(us) 00:07:33.727 [2024-12-16T19:09:18.081Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:33.727 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:33.727 Verification LBA range: start 0x0 length 0xbd0b 00:07:33.727 Nvme0n1 : 5.80 106.70 6.67 0.00 0.00 1132694.11 17644.31 1303460.63 00:07:33.727 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:33.727 Verification LBA range: start 0xbd0b length 0xbd0b 00:07:33.727 Nvme0n1 : 6.12 63.05 3.94 0.00 0.00 1848678.75 10989.88 2155226.98 00:07:33.727 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:33.727 Verification LBA range: start 0x0 length 0x4ff8 00:07:33.727 Nvme1n1p1 : 5.80 114.25 7.14 0.00 0.00 1037300.07 106470.79 1109877.37 00:07:33.727 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:33.727 Verification LBA range: start 0x4ff8 length 0x4ff8 00:07:33.727 Nvme1n1p1 : 6.19 77.83 4.86 0.00 0.00 1478080.29 38111.70 1806777.11 00:07:33.727 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:33.727 Verification LBA range: start 0x0 length 0x4ff7 00:07:33.727 Nvme1n1p2 : 5.81 106.97 6.69 0.00 0.00 1073118.11 108890.58 1858399.31 00:07:33.727 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:33.727 Verification LBA range: start 0x4ff7 length 0x4ff7 00:07:33.727 Nvme1n1p2 : 6.23 82.54 5.16 0.00 0.00 1314127.24 30650.68 1542213.32 00:07:33.727 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:33.727 Verification LBA range: start 0x0 length 0x8000 00:07:33.727 Nvme2n1 : 5.94 121.45 7.59 0.00 0.00 925998.72 42346.34 1400252.26 00:07:33.727 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:33.728 Verification LBA range: start 0x8000 length 0x8000 00:07:33.728 Nvme2n1 : 6.23 86.37 5.40 0.00 0.00 1182064.23 32465.53 1374441.16 00:07:33.728 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:33.728 Verification LBA range: start 0x0 length 0x8000 00:07:33.728 Nvme2n2 : 6.05 127.10 7.94 0.00 0.00 852740.51 65334.35 1122782.92 00:07:33.728 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:33.728 Verification LBA range: start 0x8000 length 0x8000 00:07:33.728 Nvme2n2 : 6.39 120.18 7.51 0.00 0.00 823965.34 18047.61 1400252.26 00:07:33.728 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:33.728 Verification LBA range: start 0x0 length 0x8000 00:07:33.728 Nvme2n3 : 6.09 136.15 8.51 0.00 0.00 776394.90 36498.51 1155046.79 00:07:33.728 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:33.728 Verification LBA range: start 0x8000 length 0x8000 00:07:33.728 Nvme2n3 : 6.62 183.11 11.44 0.00 0.00 515130.08 7662.67 1548666.09 00:07:33.728 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:33.728 Verification LBA range: start 0x0 length 0x2000 00:07:33.728 Nvme3n1 : 6.15 155.50 9.72 0.00 0.00 661134.48 630.15 1167952.34 00:07:33.728 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:33.728 Verification LBA range: start 0x2000 length 0x2000 00:07:33.728 Nvme3n1 : 6.91 337.19 21.07 0.00 0.00 267287.54 567.14 1587382.74 00:07:33.728 
[2024-12-16T19:09:18.082Z] =================================================================================================================== 00:07:33.728 [2024-12-16T19:09:18.082Z] Total : 1818.38 113.65 0.00 0.00 812737.73 567.14 2155226.98 00:07:35.102 00:07:35.102 real 0m9.298s 00:07:35.102 user 0m17.625s 00:07:35.102 sys 0m0.264s 00:07:35.102 19:09:19 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:35.102 19:09:19 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:07:35.102 ************************************ 00:07:35.102 END TEST bdev_verify_big_io 00:07:35.102 ************************************ 00:07:35.102 19:09:19 blockdev_nvme_gpt -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:35.102 19:09:19 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:35.102 19:09:19 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:35.102 19:09:19 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:35.102 ************************************ 00:07:35.102 START TEST bdev_write_zeroes 00:07:35.102 ************************************ 00:07:35.102 19:09:19 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:35.102 [2024-12-16 19:09:19.247338] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:07:35.102 [2024-12-16 19:09:19.247439] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63881 ] 00:07:35.102 [2024-12-16 19:09:19.397467] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.360 [2024-12-16 19:09:19.491146] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.925 Running I/O for 1 seconds... 
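For orientation before the next results: the three bdevperf jobs in this suite differ only in a few knobs, copied verbatim from the run_test lines above (flag meanings per bdevperf's standard options; -C is passed through as the tests do, unannotated here):

    # bdev_verify:        -q 128 -o 4096  -w verify       -t 5 -C -m 0x3   # 4 KiB I/Os, cores 0-1
    # bdev_verify_big_io: -q 128 -o 65536 -w verify       -t 5 -C -m 0x3   # 64 KiB I/Os, cores 0-1
    # bdev_write_zeroes:  -q 128 -o 4096  -w write_zeroes -t 1             # 1-second run, one core
    # (-q = queue depth, -o = I/O size in bytes, -w = workload, -t = seconds, -m = core mask;
    #  write_zeroes passes no -m, and the EAL line below shows -c 0x1, a single core)

The big-I/O Total row obeys the same IOPS-times-size identity as before: 1818.38 x 65536 / 1048576 ≈ 113.65 MiB/s.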
00:07:36.858 65856.00 IOPS, 257.25 MiB/s 00:07:36.858 Latency(us) 00:07:36.858 [2024-12-16T19:09:21.212Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:36.858 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:36.859 Nvme0n1 : 1.03 9362.28 36.57 0.00 0.00 13645.26 10989.88 24399.56 00:07:36.859 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:36.859 Nvme1n1p1 : 1.03 9350.87 36.53 0.00 0.00 13642.37 10687.41 23895.43 00:07:36.859 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:36.859 Nvme1n1p2 : 1.03 9339.38 36.48 0.00 0.00 13618.51 10687.41 23088.84 00:07:36.859 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:36.859 Nvme2n1 : 1.03 9328.93 36.44 0.00 0.00 13600.94 10939.47 22483.89 00:07:36.859 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:36.859 Nvme2n2 : 1.03 9318.51 36.40 0.00 0.00 13587.72 10939.47 21979.77 00:07:36.859 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:36.859 Nvme2n3 : 1.03 9308.06 36.36 0.00 0.00 13575.51 10435.35 22988.01 00:07:36.859 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:36.859 Nvme3n1 : 1.03 9297.60 36.32 0.00 0.00 13551.19 9074.22 24601.21 00:07:36.859 [2024-12-16T19:09:21.213Z] =================================================================================================================== 00:07:36.859 [2024-12-16T19:09:21.213Z] Total : 65305.64 255.10 0.00 0.00 13603.07 9074.22 24601.21 00:07:37.794 00:07:37.794 real 0m2.640s 00:07:37.794 user 0m2.335s 00:07:37.794 sys 0m0.191s 00:07:37.794 19:09:21 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:37.794 19:09:21 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:07:37.794 ************************************ 00:07:37.794 END TEST bdev_write_zeroes 00:07:37.794 ************************************ 00:07:37.794 19:09:21 blockdev_nvme_gpt -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:37.794 19:09:21 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:37.794 19:09:21 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:37.794 19:09:21 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:37.794 ************************************ 00:07:37.794 START TEST bdev_json_nonenclosed 00:07:37.794 ************************************ 00:07:37.794 19:09:21 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:37.794 [2024-12-16 19:09:21.933147] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:07:37.794 [2024-12-16 19:09:21.933285] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63934 ] 00:07:37.794 [2024-12-16 19:09:22.093242] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.052 [2024-12-16 19:09:22.195545] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.052 [2024-12-16 19:09:22.195627] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:07:38.052 [2024-12-16 19:09:22.195644] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:38.052 [2024-12-16 19:09:22.195653] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:38.052 00:07:38.052 real 0m0.514s 00:07:38.052 user 0m0.308s 00:07:38.052 sys 0m0.101s 00:07:38.052 19:09:22 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:38.052 19:09:22 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:07:38.052 ************************************ 00:07:38.052 END TEST bdev_json_nonenclosed 00:07:38.052 ************************************ 00:07:38.311 19:09:22 blockdev_nvme_gpt -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:38.311 19:09:22 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:38.311 19:09:22 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:38.311 19:09:22 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:38.311 ************************************ 00:07:38.311 START TEST bdev_json_nonarray 00:07:38.311 ************************************ 00:07:38.311 19:09:22 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:38.311 [2024-12-16 19:09:22.489280] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:07:38.311 [2024-12-16 19:09:22.489399] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63954 ] 00:07:38.311 [2024-12-16 19:09:22.645150] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.568 [2024-12-16 19:09:22.748198] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.568 [2024-12-16 19:09:22.748280] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
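The two JSON tests above are negative tests: each feeds bdevperf a deliberately malformed --json config and passes only if json_config rejects it with the error just logged. The fixture contents never appear in the trace, so the one-liners below are guesses at their minimal shape, inferred purely from the two error messages:

    # nonenclosed.json: a top-level member with no enclosing braces (guessed), so
    # json_config fails with "not enclosed in {}"
    printf '%s\n' '"subsystems": []' > nonenclosed.json
    # nonarray.json: "subsystems" present but as an object, not an array (guessed),
    # so json_config fails with "'subsystems' should be an array"
    printf '%s\n' '{ "subsystems": {} }' > nonarray.json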
00:07:38.568 [2024-12-16 19:09:22.748297] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:38.568 [2024-12-16 19:09:22.748305] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:38.827 00:07:38.827 real 0m0.506s 00:07:38.827 user 0m0.302s 00:07:38.827 sys 0m0.100s 00:07:38.827 19:09:22 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:38.827 19:09:22 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:07:38.827 ************************************ 00:07:38.827 END TEST bdev_json_nonarray 00:07:38.827 ************************************ 00:07:38.827 19:09:22 blockdev_nvme_gpt -- bdev/blockdev.sh@824 -- # [[ gpt == bdev ]] 00:07:38.827 19:09:22 blockdev_nvme_gpt -- bdev/blockdev.sh@832 -- # [[ gpt == gpt ]] 00:07:38.827 19:09:22 blockdev_nvme_gpt -- bdev/blockdev.sh@833 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:07:38.827 19:09:22 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:38.827 19:09:22 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:38.827 19:09:22 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:38.827 ************************************ 00:07:38.827 START TEST bdev_gpt_uuid 00:07:38.827 ************************************ 00:07:38.827 19:09:22 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1129 -- # bdev_gpt_uuid 00:07:38.827 19:09:22 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@651 -- # local bdev 00:07:38.827 19:09:22 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@653 -- # start_spdk_tgt 00:07:38.827 19:09:22 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=63985 00:07:38.827 19:09:22 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:38.827 19:09:22 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 63985 00:07:38.827 19:09:22 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # '[' -z 63985 ']' 00:07:38.827 19:09:22 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:38.827 19:09:22 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:38.827 19:09:22 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:38.827 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:38.827 19:09:22 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:38.827 19:09:22 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:38.827 19:09:22 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:38.827 [2024-12-16 19:09:23.048782] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
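The bdev_gpt_uuid steps that follow condense to one RPC plus a jq probe. A manual equivalent, using the default socket the freshly started spdk_tgt listens on and the partition GUID the test queries below:

    # fetch the GPT partition bdev by its unique partition GUID, then confirm the
    # GUID round-trips through the bdev's driver_specific.gpt block
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs \
        -b 6f89f330-603b-4116-ac73-2ca8eae53030 |
        jq -r '.[0].driver_specific.gpt.unique_partition_guid'
    # expected output: 6f89f330-603b-4116-ac73-2ca8eae53030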
00:07:38.827 [2024-12-16 19:09:23.048905] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63985 ] 00:07:39.086 [2024-12-16 19:09:23.210351] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.086 [2024-12-16 19:09:23.310137] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.652 19:09:23 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:39.653 19:09:23 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@868 -- # return 0 00:07:39.653 19:09:23 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@655 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:39.653 19:09:23 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.653 19:09:23 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:39.911 Some configs were skipped because the RPC state that can call them passed over. 00:07:39.911 19:09:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.911 19:09:24 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@656 -- # rpc_cmd bdev_wait_for_examine 00:07:39.911 19:09:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.911 19:09:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:39.911 19:09:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.911 19:09:24 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:07:39.911 19:09:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.911 19:09:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:39.911 19:09:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.911 19:09:24 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # bdev='[ 00:07:39.911 { 00:07:39.911 "name": "Nvme1n1p1", 00:07:39.911 "aliases": [ 00:07:39.911 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:07:39.911 ], 00:07:39.911 "product_name": "GPT Disk", 00:07:39.911 "block_size": 4096, 00:07:39.911 "num_blocks": 655104, 00:07:39.911 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:07:39.911 "assigned_rate_limits": { 00:07:39.911 "rw_ios_per_sec": 0, 00:07:39.911 "rw_mbytes_per_sec": 0, 00:07:39.911 "r_mbytes_per_sec": 0, 00:07:39.911 "w_mbytes_per_sec": 0 00:07:39.911 }, 00:07:39.911 "claimed": false, 00:07:39.911 "zoned": false, 00:07:39.911 "supported_io_types": { 00:07:39.911 "read": true, 00:07:39.911 "write": true, 00:07:39.911 "unmap": true, 00:07:39.911 "flush": true, 00:07:39.911 "reset": true, 00:07:39.911 "nvme_admin": false, 00:07:39.911 "nvme_io": false, 00:07:39.911 "nvme_io_md": false, 00:07:39.911 "write_zeroes": true, 00:07:39.911 "zcopy": false, 00:07:39.911 "get_zone_info": false, 00:07:39.911 "zone_management": false, 00:07:39.911 "zone_append": false, 00:07:39.911 "compare": true, 00:07:39.911 "compare_and_write": false, 00:07:39.911 "abort": true, 00:07:39.911 "seek_hole": false, 00:07:39.911 "seek_data": false, 00:07:39.911 "copy": true, 00:07:39.911 "nvme_iov_md": false 00:07:39.911 }, 00:07:39.911 "driver_specific": { 
00:07:39.911 "gpt": { 00:07:39.911 "base_bdev": "Nvme1n1", 00:07:39.911 "offset_blocks": 256, 00:07:39.911 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:07:39.911 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:07:39.911 "partition_name": "SPDK_TEST_first" 00:07:39.911 } 00:07:39.911 } 00:07:39.911 } 00:07:39.911 ]' 00:07:40.189 19:09:24 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # jq -r length 00:07:40.189 19:09:24 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # [[ 1 == \1 ]] 00:07:40.189 19:09:24 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # jq -r '.[0].aliases[0]' 00:07:40.189 19:09:24 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:07:40.189 19:09:24 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:07:40.189 19:09:24 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:07:40.189 19:09:24 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:07:40.189 19:09:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.189 19:09:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:40.189 19:09:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.189 19:09:24 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # bdev='[ 00:07:40.189 { 00:07:40.189 "name": "Nvme1n1p2", 00:07:40.189 "aliases": [ 00:07:40.189 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:07:40.189 ], 00:07:40.189 "product_name": "GPT Disk", 00:07:40.189 "block_size": 4096, 00:07:40.189 "num_blocks": 655103, 00:07:40.189 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:07:40.189 "assigned_rate_limits": { 00:07:40.189 "rw_ios_per_sec": 0, 00:07:40.189 "rw_mbytes_per_sec": 0, 00:07:40.189 "r_mbytes_per_sec": 0, 00:07:40.189 "w_mbytes_per_sec": 0 00:07:40.189 }, 00:07:40.189 "claimed": false, 00:07:40.189 "zoned": false, 00:07:40.189 "supported_io_types": { 00:07:40.189 "read": true, 00:07:40.189 "write": true, 00:07:40.189 "unmap": true, 00:07:40.189 "flush": true, 00:07:40.189 "reset": true, 00:07:40.189 "nvme_admin": false, 00:07:40.189 "nvme_io": false, 00:07:40.189 "nvme_io_md": false, 00:07:40.189 "write_zeroes": true, 00:07:40.189 "zcopy": false, 00:07:40.189 "get_zone_info": false, 00:07:40.189 "zone_management": false, 00:07:40.189 "zone_append": false, 00:07:40.189 "compare": true, 00:07:40.189 "compare_and_write": false, 00:07:40.189 "abort": true, 00:07:40.189 "seek_hole": false, 00:07:40.189 "seek_data": false, 00:07:40.189 "copy": true, 00:07:40.189 "nvme_iov_md": false 00:07:40.189 }, 00:07:40.189 "driver_specific": { 00:07:40.189 "gpt": { 00:07:40.189 "base_bdev": "Nvme1n1", 00:07:40.189 "offset_blocks": 655360, 00:07:40.189 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:07:40.189 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:07:40.189 "partition_name": "SPDK_TEST_second" 00:07:40.189 } 00:07:40.189 } 00:07:40.189 } 00:07:40.189 ]' 00:07:40.189 19:09:24 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@664 -- # jq -r length 00:07:40.189 19:09:24 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@664 -- # [[ 1 == \1 ]] 00:07:40.189 19:09:24 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # jq -r '.[0].aliases[0]' 00:07:40.189 19:09:24 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:07:40.189 19:09:24 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:07:40.189 19:09:24 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:07:40.189 19:09:24 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@668 -- # killprocess 63985 00:07:40.189 19:09:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # '[' -z 63985 ']' 00:07:40.189 19:09:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # kill -0 63985 00:07:40.189 19:09:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # uname 00:07:40.189 19:09:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:40.189 19:09:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63985 00:07:40.189 19:09:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:40.189 19:09:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:40.189 killing process with pid 63985 00:07:40.189 19:09:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63985' 00:07:40.189 19:09:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@973 -- # kill 63985 00:07:40.189 19:09:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@978 -- # wait 63985 00:07:42.128 00:07:42.128 real 0m3.045s 00:07:42.128 user 0m3.195s 00:07:42.128 sys 0m0.373s 00:07:42.128 19:09:26 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:42.128 ************************************ 00:07:42.128 END TEST bdev_gpt_uuid 00:07:42.128 ************************************ 00:07:42.128 19:09:26 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:42.128 19:09:26 blockdev_nvme_gpt -- bdev/blockdev.sh@836 -- # [[ gpt == crypto_sw ]] 00:07:42.128 19:09:26 blockdev_nvme_gpt -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:07:42.128 19:09:26 blockdev_nvme_gpt -- bdev/blockdev.sh@849 -- # cleanup 00:07:42.128 19:09:26 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:07:42.128 19:09:26 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:42.128 19:09:26 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:07:42.128 19:09:26 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:07:42.128 19:09:26 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:07:42.128 19:09:26 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:42.128 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:42.387 Waiting for block devices as requested 00:07:42.387 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:07:42.387 0000:00:10.0 (1b36 0010): 
uio_pci_generic -> nvme 00:07:42.387 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:07:42.645 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:07:47.921 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:07:47.921 19:09:31 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:07:47.921 19:09:31 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:07:47.921 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:07:47.921 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:07:47.921 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:07:47.921 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:07:47.921 19:09:32 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:07:47.921 00:07:47.921 real 0m56.079s 00:07:47.921 user 1m12.253s 00:07:47.921 sys 0m7.823s 00:07:47.921 19:09:32 blockdev_nvme_gpt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:47.921 19:09:32 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:47.921 ************************************ 00:07:47.921 END TEST blockdev_nvme_gpt 00:07:47.921 ************************************ 00:07:47.921 19:09:32 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:07:47.921 19:09:32 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:47.921 19:09:32 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:47.921 19:09:32 -- common/autotest_common.sh@10 -- # set +x 00:07:47.921 ************************************ 00:07:47.921 START TEST nvme 00:07:47.921 ************************************ 00:07:47.921 19:09:32 nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:07:47.921 * Looking for test storage... 00:07:47.921 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:07:47.921 19:09:32 nvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:47.921 19:09:32 nvme -- common/autotest_common.sh@1711 -- # lcov --version 00:07:47.921 19:09:32 nvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:48.180 19:09:32 nvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:48.180 19:09:32 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:48.180 19:09:32 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:48.180 19:09:32 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:48.180 19:09:32 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:07:48.180 19:09:32 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:07:48.180 19:09:32 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:07:48.180 19:09:32 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:07:48.180 19:09:32 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:07:48.180 19:09:32 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:07:48.180 19:09:32 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:07:48.180 19:09:32 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:48.180 19:09:32 nvme -- scripts/common.sh@344 -- # case "$op" in 00:07:48.180 19:09:32 nvme -- scripts/common.sh@345 -- # : 1 00:07:48.180 19:09:32 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:48.180 19:09:32 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:48.180 19:09:32 nvme -- scripts/common.sh@365 -- # decimal 1 00:07:48.180 19:09:32 nvme -- scripts/common.sh@353 -- # local d=1 00:07:48.180 19:09:32 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:48.180 19:09:32 nvme -- scripts/common.sh@355 -- # echo 1 00:07:48.180 19:09:32 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:07:48.180 19:09:32 nvme -- scripts/common.sh@366 -- # decimal 2 00:07:48.180 19:09:32 nvme -- scripts/common.sh@353 -- # local d=2 00:07:48.180 19:09:32 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:48.180 19:09:32 nvme -- scripts/common.sh@355 -- # echo 2 00:07:48.180 19:09:32 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:07:48.180 19:09:32 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:48.180 19:09:32 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:48.180 19:09:32 nvme -- scripts/common.sh@368 -- # return 0 00:07:48.180 19:09:32 nvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:48.180 19:09:32 nvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:48.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:48.180 --rc genhtml_branch_coverage=1 00:07:48.180 --rc genhtml_function_coverage=1 00:07:48.180 --rc genhtml_legend=1 00:07:48.180 --rc geninfo_all_blocks=1 00:07:48.180 --rc geninfo_unexecuted_blocks=1 00:07:48.180 00:07:48.180 ' 00:07:48.180 19:09:32 nvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:48.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:48.180 --rc genhtml_branch_coverage=1 00:07:48.180 --rc genhtml_function_coverage=1 00:07:48.180 --rc genhtml_legend=1 00:07:48.180 --rc geninfo_all_blocks=1 00:07:48.180 --rc geninfo_unexecuted_blocks=1 00:07:48.180 00:07:48.180 ' 00:07:48.180 19:09:32 nvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:48.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:48.180 --rc genhtml_branch_coverage=1 00:07:48.180 --rc genhtml_function_coverage=1 00:07:48.180 --rc genhtml_legend=1 00:07:48.180 --rc geninfo_all_blocks=1 00:07:48.181 --rc geninfo_unexecuted_blocks=1 00:07:48.181 00:07:48.181 ' 00:07:48.181 19:09:32 nvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:48.181 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:48.181 --rc genhtml_branch_coverage=1 00:07:48.181 --rc genhtml_function_coverage=1 00:07:48.181 --rc genhtml_legend=1 00:07:48.181 --rc geninfo_all_blocks=1 00:07:48.181 --rc geninfo_unexecuted_blocks=1 00:07:48.181 00:07:48.181 ' 00:07:48.181 19:09:32 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:48.440 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:49.007 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:49.007 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:49.007 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:07:49.007 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:07:49.007 19:09:33 nvme -- nvme/nvme.sh@79 -- # uname 00:07:49.007 19:09:33 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:07:49.007 19:09:33 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:07:49.007 19:09:33 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:07:49.007 19:09:33 nvme -- common/autotest_common.sh@1086 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:07:49.007 19:09:33 nvme -- 
common/autotest_common.sh@1072 -- # _randomize_va_space=2 00:07:49.007 19:09:33 nvme -- common/autotest_common.sh@1073 -- # echo 0 00:07:49.007 19:09:33 nvme -- common/autotest_common.sh@1075 -- # stubpid=64620 00:07:49.007 Waiting for stub to ready for secondary processes... 00:07:49.007 19:09:33 nvme -- common/autotest_common.sh@1076 -- # echo Waiting for stub to ready for secondary processes... 00:07:49.007 19:09:33 nvme -- common/autotest_common.sh@1074 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:07:49.007 19:09:33 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:07:49.007 19:09:33 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/64620 ]] 00:07:49.007 19:09:33 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:07:49.007 [2024-12-16 19:09:33.317661] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:07:49.007 [2024-12-16 19:09:33.317784] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:07:49.942 [2024-12-16 19:09:34.095967] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:49.942 [2024-12-16 19:09:34.192092] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:07:49.942 [2024-12-16 19:09:34.192282] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:07:49.942 [2024-12-16 19:09:34.192311] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:07:49.942 [2024-12-16 19:09:34.205649] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:07:49.942 [2024-12-16 19:09:34.205685] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:49.942 [2024-12-16 19:09:34.210313] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:07:49.942 [2024-12-16 19:09:34.210402] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:07:49.942 [2024-12-16 19:09:34.215136] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:49.942 [2024-12-16 19:09:34.216496] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:07:49.942 [2024-12-16 19:09:34.216547] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:07:49.942 [2024-12-16 19:09:34.217899] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:49.942 [2024-12-16 19:09:34.218076] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:07:49.942 [2024-12-16 19:09:34.218115] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:07:49.942 [2024-12-16 19:09:34.220057] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:49.942 [2024-12-16 19:09:34.220353] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:07:49.942 [2024-12-16 19:09:34.220405] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:07:49.942 [2024-12-16 19:09:34.220441] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:07:49.942 [2024-12-16 19:09:34.220475] nvme_cuse.c: 
928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:07:49.942 done. 00:07:49.942 19:09:34 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:07:49.942 19:09:34 nvme -- common/autotest_common.sh@1082 -- # echo done. 00:07:49.942 19:09:34 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:07:49.942 19:09:34 nvme -- common/autotest_common.sh@1105 -- # '[' 10 -le 1 ']' 00:07:49.942 19:09:34 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:49.942 19:09:34 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:50.200 ************************************ 00:07:50.200 START TEST nvme_reset 00:07:50.200 ************************************ 00:07:50.200 19:09:34 nvme.nvme_reset -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:07:50.200 Initializing NVMe Controllers 00:07:50.200 Skipping QEMU NVMe SSD at 0000:00:10.0 00:07:50.200 Skipping QEMU NVMe SSD at 0000:00:11.0 00:07:50.200 Skipping QEMU NVMe SSD at 0000:00:13.0 00:07:50.200 Skipping QEMU NVMe SSD at 0000:00:12.0 00:07:50.200 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:07:50.200 ************************************ 00:07:50.200 END TEST nvme_reset 00:07:50.200 ************************************ 00:07:50.200 00:07:50.200 real 0m0.230s 00:07:50.200 user 0m0.081s 00:07:50.200 sys 0m0.097s 00:07:50.200 19:09:34 nvme.nvme_reset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:50.200 19:09:34 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:07:50.461 19:09:34 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:07:50.461 19:09:34 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:50.461 19:09:34 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:50.461 19:09:34 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:50.461 ************************************ 00:07:50.461 START TEST nvme_identify 00:07:50.461 ************************************ 00:07:50.461 19:09:34 nvme.nvme_identify -- common/autotest_common.sh@1129 -- # nvme_identify 00:07:50.461 19:09:34 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:07:50.461 19:09:34 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:07:50.461 19:09:34 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:07:50.461 19:09:34 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:07:50.461 19:09:34 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # bdfs=() 00:07:50.461 19:09:34 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # local bdfs 00:07:50.461 19:09:34 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:50.461 19:09:34 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:50.461 19:09:34 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:07:50.461 19:09:34 nvme.nvme_identify -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:07:50.461 19:09:34 nvme.nvme_identify -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:07:50.461 19:09:34 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:07:50.462 [2024-12-16 
19:09:34.790289] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0, 0] process 64642 terminated unexpected 00:07:50.462 ===================================================== 00:07:50.462 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:50.462 ===================================================== 00:07:50.462 Controller Capabilities/Features 00:07:50.462 ================================ 00:07:50.462 Vendor ID: 1b36 00:07:50.462 Subsystem Vendor ID: 1af4 00:07:50.462 Serial Number: 12340 00:07:50.462 Model Number: QEMU NVMe Ctrl 00:07:50.462 Firmware Version: 8.0.0 00:07:50.462 Recommended Arb Burst: 6 00:07:50.462 IEEE OUI Identifier: 00 54 52 00:07:50.462 Multi-path I/O 00:07:50.462 May have multiple subsystem ports: No 00:07:50.462 May have multiple controllers: No 00:07:50.462 Associated with SR-IOV VF: No 00:07:50.462 Max Data Transfer Size: 524288 00:07:50.462 Max Number of Namespaces: 256 00:07:50.462 Max Number of I/O Queues: 64 00:07:50.462 NVMe Specification Version (VS): 1.4 00:07:50.462 NVMe Specification Version (Identify): 1.4 00:07:50.462 Maximum Queue Entries: 2048 00:07:50.462 Contiguous Queues Required: Yes 00:07:50.462 Arbitration Mechanisms Supported 00:07:50.462 Weighted Round Robin: Not Supported 00:07:50.462 Vendor Specific: Not Supported 00:07:50.462 Reset Timeout: 7500 ms 00:07:50.462 Doorbell Stride: 4 bytes 00:07:50.462 NVM Subsystem Reset: Not Supported 00:07:50.462 Command Sets Supported 00:07:50.462 NVM Command Set: Supported 00:07:50.462 Boot Partition: Not Supported 00:07:50.462 Memory Page Size Minimum: 4096 bytes 00:07:50.462 Memory Page Size Maximum: 65536 bytes 00:07:50.462 Persistent Memory Region: Not Supported 00:07:50.462 Optional Asynchronous Events Supported 00:07:50.462 Namespace Attribute Notices: Supported 00:07:50.462 Firmware Activation Notices: Not Supported 00:07:50.462 ANA Change Notices: Not Supported 00:07:50.462 PLE Aggregate Log Change Notices: Not Supported 00:07:50.462 LBA Status Info Alert Notices: Not Supported 00:07:50.462 EGE Aggregate Log Change Notices: Not Supported 00:07:50.462 Normal NVM Subsystem Shutdown event: Not Supported 00:07:50.462 Zone Descriptor Change Notices: Not Supported 00:07:50.462 Discovery Log Change Notices: Not Supported 00:07:50.462 Controller Attributes 00:07:50.462 128-bit Host Identifier: Not Supported 00:07:50.462 Non-Operational Permissive Mode: Not Supported 00:07:50.462 NVM Sets: Not Supported 00:07:50.462 Read Recovery Levels: Not Supported 00:07:50.462 Endurance Groups: Not Supported 00:07:50.462 Predictable Latency Mode: Not Supported 00:07:50.462 Traffic Based Keep ALive: Not Supported 00:07:50.462 Namespace Granularity: Not Supported 00:07:50.462 SQ Associations: Not Supported 00:07:50.462 UUID List: Not Supported 00:07:50.462 Multi-Domain Subsystem: Not Supported 00:07:50.462 Fixed Capacity Management: Not Supported 00:07:50.462 Variable Capacity Management: Not Supported 00:07:50.462 Delete Endurance Group: Not Supported 00:07:50.462 Delete NVM Set: Not Supported 00:07:50.462 Extended LBA Formats Supported: Supported 00:07:50.462 Flexible Data Placement Supported: Not Supported 00:07:50.462 00:07:50.462 Controller Memory Buffer Support 00:07:50.462 ================================ 00:07:50.462 Supported: No 00:07:50.462 00:07:50.462 Persistent Memory Region Support 00:07:50.462 ================================ 00:07:50.462 Supported: No 00:07:50.462 00:07:50.462 Admin Command Set Attributes 00:07:50.462 ============================ 00:07:50.462 Security Send/Receive: 
Not Supported 00:07:50.462 Format NVM: Supported 00:07:50.462 Firmware Activate/Download: Not Supported 00:07:50.462 Namespace Management: Supported 00:07:50.462 Device Self-Test: Not Supported 00:07:50.462 Directives: Supported 00:07:50.462 NVMe-MI: Not Supported 00:07:50.462 Virtualization Management: Not Supported 00:07:50.462 Doorbell Buffer Config: Supported 00:07:50.462 Get LBA Status Capability: Not Supported 00:07:50.462 Command & Feature Lockdown Capability: Not Supported 00:07:50.462 Abort Command Limit: 4 00:07:50.462 Async Event Request Limit: 4 00:07:50.462 Number of Firmware Slots: N/A 00:07:50.462 Firmware Slot 1 Read-Only: N/A 00:07:50.462 Firmware Activation Without Reset: N/A 00:07:50.462 Multiple Update Detection Support: N/A 00:07:50.462 Firmware Update Granularity: No Information Provided 00:07:50.462 Per-Namespace SMART Log: Yes 00:07:50.462 Asymmetric Namespace Access Log Page: Not Supported 00:07:50.462 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:07:50.462 Command Effects Log Page: Supported 00:07:50.462 Get Log Page Extended Data: Supported 00:07:50.462 Telemetry Log Pages: Not Supported 00:07:50.462 Persistent Event Log Pages: Not Supported 00:07:50.462 Supported Log Pages Log Page: May Support 00:07:50.462 Commands Supported & Effects Log Page: Not Supported 00:07:50.462 Feature Identifiers & Effects Log Page:May Support 00:07:50.462 NVMe-MI Commands & Effects Log Page: May Support 00:07:50.462 Data Area 4 for Telemetry Log: Not Supported 00:07:50.462 Error Log Page Entries Supported: 1 00:07:50.462 Keep Alive: Not Supported 00:07:50.462 00:07:50.462 NVM Command Set Attributes 00:07:50.462 ========================== 00:07:50.462 Submission Queue Entry Size 00:07:50.462 Max: 64 00:07:50.462 Min: 64 00:07:50.462 Completion Queue Entry Size 00:07:50.462 Max: 16 00:07:50.462 Min: 16 00:07:50.462 Number of Namespaces: 256 00:07:50.462 Compare Command: Supported 00:07:50.462 Write Uncorrectable Command: Not Supported 00:07:50.462 Dataset Management Command: Supported 00:07:50.462 Write Zeroes Command: Supported 00:07:50.462 Set Features Save Field: Supported 00:07:50.462 Reservations: Not Supported 00:07:50.462 Timestamp: Supported 00:07:50.462 Copy: Supported 00:07:50.462 Volatile Write Cache: Present 00:07:50.462 Atomic Write Unit (Normal): 1 00:07:50.462 Atomic Write Unit (PFail): 1 00:07:50.462 Atomic Compare & Write Unit: 1 00:07:50.462 Fused Compare & Write: Not Supported 00:07:50.462 Scatter-Gather List 00:07:50.462 SGL Command Set: Supported 00:07:50.462 SGL Keyed: Not Supported 00:07:50.462 SGL Bit Bucket Descriptor: Not Supported 00:07:50.462 SGL Metadata Pointer: Not Supported 00:07:50.462 Oversized SGL: Not Supported 00:07:50.462 SGL Metadata Address: Not Supported 00:07:50.462 SGL Offset: Not Supported 00:07:50.462 Transport SGL Data Block: Not Supported 00:07:50.462 Replay Protected Memory Block: Not Supported 00:07:50.462 00:07:50.462 Firmware Slot Information 00:07:50.462 ========================= 00:07:50.462 Active slot: 1 00:07:50.462 Slot 1 Firmware Revision: 1.0 00:07:50.462 00:07:50.462 00:07:50.462 Commands Supported and Effects 00:07:50.462 ============================== 00:07:50.462 Admin Commands 00:07:50.462 -------------- 00:07:50.462 Delete I/O Submission Queue (00h): Supported 00:07:50.462 Create I/O Submission Queue (01h): Supported 00:07:50.462 Get Log Page (02h): Supported 00:07:50.462 Delete I/O Completion Queue (04h): Supported 00:07:50.462 Create I/O Completion Queue (05h): Supported 00:07:50.462 Identify (06h): Supported 
00:07:50.462 Abort (08h): Supported 00:07:50.462 Set Features (09h): Supported 00:07:50.462 Get Features (0Ah): Supported 00:07:50.462 Asynchronous Event Request (0Ch): Supported 00:07:50.462 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:50.462 Directive Send (19h): Supported 00:07:50.462 Directive Receive (1Ah): Supported 00:07:50.462 Virtualization Management (1Ch): Supported 00:07:50.462 Doorbell Buffer Config (7Ch): Supported 00:07:50.462 Format NVM (80h): Supported LBA-Change 00:07:50.462 I/O Commands 00:07:50.462 ------------ 00:07:50.462 Flush (00h): Supported LBA-Change 00:07:50.462 Write (01h): Supported LBA-Change 00:07:50.462 Read (02h): Supported 00:07:50.462 Compare (05h): Supported 00:07:50.462 Write Zeroes (08h): Supported LBA-Change 00:07:50.462 Dataset Management (09h): Supported LBA-Change 00:07:50.462 Unknown (0Ch): Supported 00:07:50.462 Unknown (12h): Supported 00:07:50.462 Copy (19h): Supported LBA-Change 00:07:50.462 Unknown (1Dh): Supported LBA-Change 00:07:50.462 00:07:50.462 Error Log 00:07:50.462 ========= 00:07:50.462 00:07:50.462 Arbitration 00:07:50.462 =========== 00:07:50.462 Arbitration Burst: no limit 00:07:50.462 00:07:50.462 Power Management 00:07:50.462 ================ 00:07:50.462 Number of Power States: 1 00:07:50.462 Current Power State: Power State #0 00:07:50.462 Power State #0: 00:07:50.462 Max Power: 25.00 W 00:07:50.462 Non-Operational State: Operational 00:07:50.462 Entry Latency: 16 microseconds 00:07:50.462 Exit Latency: 4 microseconds 00:07:50.462 Relative Read Throughput: 0 00:07:50.462 Relative Read Latency: 0 00:07:50.462 Relative Write Throughput: 0 00:07:50.462 Relative Write Latency: 0 00:07:50.462 [2024-12-16 19:09:34.791550] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0, 0] process 64642 terminated unexpected 00:07:50.462 Idle Power: Not Reported 00:07:50.462 Active Power: Not Reported 00:07:50.462 Non-Operational Permissive Mode: Not Supported 00:07:50.462 00:07:50.462 Health Information 00:07:50.462 ================== 00:07:50.462 Critical Warnings: 00:07:50.462 Available Spare Space: OK 00:07:50.462 Temperature: OK 00:07:50.462 Device Reliability: OK 00:07:50.462 Read Only: No 00:07:50.462 Volatile Memory Backup: OK 00:07:50.462 Current Temperature: 323 Kelvin (50 Celsius) 00:07:50.462 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:50.462 Available Spare: 0% 00:07:50.462 Available Spare Threshold: 0% 00:07:50.462 Life Percentage Used: 0% 00:07:50.462 Data Units Read: 654 00:07:50.462 Data Units Written: 582 00:07:50.462 Host Read Commands: 39089 00:07:50.462 Host Write Commands: 38875 00:07:50.462 Controller Busy Time: 0 minutes 00:07:50.462 Power Cycles: 0 00:07:50.462 Power On Hours: 0 hours 00:07:50.462 Unsafe Shutdowns: 0 00:07:50.462 Unrecoverable Media Errors: 0 00:07:50.462 Lifetime Error Log Entries: 0 00:07:50.462 Warning Temperature Time: 0 minutes 00:07:50.462 Critical Temperature Time: 0 minutes 00:07:50.462 00:07:50.462 Number of Queues 00:07:50.462 ================ 00:07:50.462 Number of I/O Submission Queues: 64 00:07:50.462 Number of I/O Completion Queues: 64 00:07:50.462 00:07:50.462 ZNS Specific Controller Data 00:07:50.462 ============================ 00:07:50.462 Zone Append Size Limit: 0 00:07:50.462 00:07:50.462 00:07:50.462 Active Namespaces 00:07:50.462 ================= 00:07:50.462 Namespace ID:1 00:07:50.462 Error Recovery Timeout: Unlimited 00:07:50.462 Command Set Identifier: NVM (00h) 00:07:50.462 Deallocate: Supported
Deallocated/Unwritten Error: Supported 00:07:50.462 Deallocated Read Value: All 0x00 00:07:50.462 Deallocate in Write Zeroes: Not Supported 00:07:50.462 Deallocated Guard Field: 0xFFFF 00:07:50.462 Flush: Supported 00:07:50.462 Reservation: Not Supported 00:07:50.462 Metadata Transferred as: Separate Metadata Buffer 00:07:50.462 Namespace Sharing Capabilities: Private 00:07:50.462 Size (in LBAs): 1548666 (5GiB) 00:07:50.462 Capacity (in LBAs): 1548666 (5GiB) 00:07:50.462 Utilization (in LBAs): 1548666 (5GiB) 00:07:50.462 Thin Provisioning: Not Supported 00:07:50.462 Per-NS Atomic Units: No 00:07:50.462 Maximum Single Source Range Length: 128 00:07:50.462 Maximum Copy Length: 128 00:07:50.462 Maximum Source Range Count: 128 00:07:50.462 NGUID/EUI64 Never Reused: No 00:07:50.462 Namespace Write Protected: No 00:07:50.462 Number of LBA Formats: 8 00:07:50.462 Current LBA Format: LBA Format #07 00:07:50.462 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:50.462 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:50.463 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:50.463 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:50.463 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:50.463 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:50.463 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:50.463 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:50.463 00:07:50.463 NVM Specific Namespace Data 00:07:50.463 =========================== 00:07:50.463 Logical Block Storage Tag Mask: 0 00:07:50.463 Protection Information Capabilities: 00:07:50.463 16b Guard Protection Information Storage Tag Support: No 00:07:50.463 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:50.463 Storage Tag Check Read Support: No 00:07:50.463 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.463 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.463 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.463 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.463 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.463 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.463 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.463 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.463 ===================================================== 00:07:50.463 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:50.463 ===================================================== 00:07:50.463 Controller Capabilities/Features 00:07:50.463 ================================ 00:07:50.463 Vendor ID: 1b36 00:07:50.463 Subsystem Vendor ID: 1af4 00:07:50.463 Serial Number: 12341 00:07:50.463 Model Number: QEMU NVMe Ctrl 00:07:50.463 Firmware Version: 8.0.0 00:07:50.463 Recommended Arb Burst: 6 00:07:50.463 IEEE OUI Identifier: 00 54 52 00:07:50.463 Multi-path I/O 00:07:50.463 May have multiple subsystem ports: No 00:07:50.463 May have multiple controllers: No 00:07:50.463 Associated with SR-IOV VF: No 00:07:50.463 Max Data Transfer Size: 524288 00:07:50.463 Max Number of Namespaces: 256 00:07:50.463 Max Number of I/O Queues: 64 00:07:50.463 NVMe Specification Version (VS): 1.4 00:07:50.463 NVMe 
Specification Version (Identify): 1.4 00:07:50.463 Maximum Queue Entries: 2048 00:07:50.463 Contiguous Queues Required: Yes 00:07:50.463 Arbitration Mechanisms Supported 00:07:50.463 Weighted Round Robin: Not Supported 00:07:50.463 Vendor Specific: Not Supported 00:07:50.463 Reset Timeout: 7500 ms 00:07:50.463 Doorbell Stride: 4 bytes 00:07:50.463 NVM Subsystem Reset: Not Supported 00:07:50.463 Command Sets Supported 00:07:50.463 NVM Command Set: Supported 00:07:50.463 Boot Partition: Not Supported 00:07:50.463 Memory Page Size Minimum: 4096 bytes 00:07:50.463 Memory Page Size Maximum: 65536 bytes 00:07:50.463 Persistent Memory Region: Not Supported 00:07:50.463 Optional Asynchronous Events Supported 00:07:50.463 Namespace Attribute Notices: Supported 00:07:50.463 Firmware Activation Notices: Not Supported 00:07:50.463 ANA Change Notices: Not Supported 00:07:50.463 PLE Aggregate Log Change Notices: Not Supported 00:07:50.463 LBA Status Info Alert Notices: Not Supported 00:07:50.463 EGE Aggregate Log Change Notices: Not Supported 00:07:50.463 Normal NVM Subsystem Shutdown event: Not Supported 00:07:50.463 Zone Descriptor Change Notices: Not Supported 00:07:50.463 Discovery Log Change Notices: Not Supported 00:07:50.463 Controller Attributes 00:07:50.463 128-bit Host Identifier: Not Supported 00:07:50.463 Non-Operational Permissive Mode: Not Supported 00:07:50.463 NVM Sets: Not Supported 00:07:50.463 Read Recovery Levels: Not Supported 00:07:50.463 Endurance Groups: Not Supported 00:07:50.463 Predictable Latency Mode: Not Supported 00:07:50.463 Traffic Based Keep ALive: Not Supported 00:07:50.463 Namespace Granularity: Not Supported 00:07:50.463 SQ Associations: Not Supported 00:07:50.463 UUID List: Not Supported 00:07:50.463 Multi-Domain Subsystem: Not Supported 00:07:50.463 Fixed Capacity Management: Not Supported 00:07:50.463 Variable Capacity Management: Not Supported 00:07:50.463 Delete Endurance Group: Not Supported 00:07:50.463 Delete NVM Set: Not Supported 00:07:50.463 Extended LBA Formats Supported: Supported 00:07:50.463 Flexible Data Placement Supported: Not Supported 00:07:50.463 00:07:50.463 Controller Memory Buffer Support 00:07:50.463 ================================ 00:07:50.463 Supported: No 00:07:50.463 00:07:50.463 Persistent Memory Region Support 00:07:50.463 ================================ 00:07:50.463 Supported: No 00:07:50.463 00:07:50.463 Admin Command Set Attributes 00:07:50.463 ============================ 00:07:50.463 Security Send/Receive: Not Supported 00:07:50.463 Format NVM: Supported 00:07:50.463 Firmware Activate/Download: Not Supported 00:07:50.463 Namespace Management: Supported 00:07:50.463 Device Self-Test: Not Supported 00:07:50.463 Directives: Supported 00:07:50.463 NVMe-MI: Not Supported 00:07:50.463 Virtualization Management: Not Supported 00:07:50.463 Doorbell Buffer Config: Supported 00:07:50.463 Get LBA Status Capability: Not Supported 00:07:50.463 Command & Feature Lockdown Capability: Not Supported 00:07:50.463 Abort Command Limit: 4 00:07:50.463 Async Event Request Limit: 4 00:07:50.463 Number of Firmware Slots: N/A 00:07:50.463 Firmware Slot 1 Read-Only: N/A 00:07:50.463 Firmware Activation Without Reset: N/A 00:07:50.463 Multiple Update Detection Support: N/A 00:07:50.463 Firmware Update Granularity: No Information Provided 00:07:50.463 Per-Namespace SMART Log: Yes 00:07:50.463 Asymmetric Namespace Access Log Page: Not Supported 00:07:50.463 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:07:50.463 Command Effects Log Page: Supported 
00:07:50.463 Get Log Page Extended Data: Supported 00:07:50.463 Telemetry Log Pages: Not Supported 00:07:50.463 Persistent Event Log Pages: Not Supported 00:07:50.463 Supported Log Pages Log Page: May Support 00:07:50.463 Commands Supported & Effects Log Page: Not Supported 00:07:50.463 Feature Identifiers & Effects Log Page:May Support 00:07:50.463 NVMe-MI Commands & Effects Log Page: May Support 00:07:50.463 Data Area 4 for Telemetry Log: Not Supported 00:07:50.463 Error Log Page Entries Supported: 1 00:07:50.463 Keep Alive: Not Supported 00:07:50.463 00:07:50.463 NVM Command Set Attributes 00:07:50.463 ========================== 00:07:50.463 Submission Queue Entry Size 00:07:50.463 Max: 64 00:07:50.463 Min: 64 00:07:50.463 Completion Queue Entry Size 00:07:50.463 Max: 16 00:07:50.463 Min: 16 00:07:50.463 Number of Namespaces: 256 00:07:50.463 Compare Command: Supported 00:07:50.463 Write Uncorrectable Command: Not Supported 00:07:50.463 Dataset Management Command: Supported 00:07:50.463 Write Zeroes Command: Supported 00:07:50.463 Set Features Save Field: Supported 00:07:50.463 Reservations: Not Supported 00:07:50.463 Timestamp: Supported 00:07:50.463 Copy: Supported 00:07:50.463 Volatile Write Cache: Present 00:07:50.463 Atomic Write Unit (Normal): 1 00:07:50.463 Atomic Write Unit (PFail): 1 00:07:50.463 Atomic Compare & Write Unit: 1 00:07:50.463 Fused Compare & Write: Not Supported 00:07:50.463 Scatter-Gather List 00:07:50.463 SGL Command Set: Supported 00:07:50.463 SGL Keyed: Not Supported 00:07:50.463 SGL Bit Bucket Descriptor: Not Supported 00:07:50.463 SGL Metadata Pointer: Not Supported 00:07:50.463 Oversized SGL: Not Supported 00:07:50.463 SGL Metadata Address: Not Supported 00:07:50.463 SGL Offset: Not Supported 00:07:50.463 Transport SGL Data Block: Not Supported 00:07:50.463 Replay Protected Memory Block: Not Supported 00:07:50.463 00:07:50.463 Firmware Slot Information 00:07:50.463 ========================= 00:07:50.463 Active slot: 1 00:07:50.463 Slot 1 Firmware Revision: 1.0 00:07:50.463 00:07:50.463 00:07:50.463 Commands Supported and Effects 00:07:50.463 ============================== 00:07:50.463 Admin Commands 00:07:50.463 -------------- 00:07:50.463 Delete I/O Submission Queue (00h): Supported 00:07:50.463 Create I/O Submission Queue (01h): Supported 00:07:50.463 Get Log Page (02h): Supported 00:07:50.463 Delete I/O Completion Queue (04h): Supported 00:07:50.463 Create I/O Completion Queue (05h): Supported 00:07:50.463 Identify (06h): Supported 00:07:50.463 Abort (08h): Supported 00:07:50.463 Set Features (09h): Supported 00:07:50.463 Get Features (0Ah): Supported 00:07:50.463 Asynchronous Event Request (0Ch): Supported 00:07:50.463 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:50.463 Directive Send (19h): Supported 00:07:50.463 Directive Receive (1Ah): Supported 00:07:50.463 Virtualization Management (1Ch): Supported 00:07:50.463 Doorbell Buffer Config (7Ch): Supported 00:07:50.463 Format NVM (80h): Supported LBA-Change 00:07:50.463 I/O Commands 00:07:50.463 ------------ 00:07:50.463 Flush (00h): Supported LBA-Change 00:07:50.463 Write (01h): Supported LBA-Change 00:07:50.463 Read (02h): Supported 00:07:50.463 Compare (05h): Supported 00:07:50.463 Write Zeroes (08h): Supported LBA-Change 00:07:50.463 Dataset Management (09h): Supported LBA-Change 00:07:50.463 Unknown (0Ch): Supported 00:07:50.463 Unknown (12h): Supported 00:07:50.463 Copy (19h): Supported LBA-Change 00:07:50.464 Unknown (1Dh): Supported LBA-Change 00:07:50.464 00:07:50.464 Error 
Log 00:07:50.464 ========= 00:07:50.464 00:07:50.464 Arbitration 00:07:50.464 =========== 00:07:50.464 Arbitration Burst: no limit 00:07:50.464 00:07:50.464 Power Management 00:07:50.464 ================ 00:07:50.464 Number of Power States: 1 00:07:50.464 Current Power State: Power State #0 00:07:50.464 Power State #0: 00:07:50.464 Max Power: 25.00 W 00:07:50.464 Non-Operational State: Operational 00:07:50.464 Entry Latency: 16 microseconds 00:07:50.464 Exit Latency: 4 microseconds 00:07:50.464 Relative Read Throughput: 0 00:07:50.464 Relative Read Latency: 0 00:07:50.464 Relative Write Throughput: 0 00:07:50.464 Relative Write Latency: 0 00:07:50.464 Idle Power: Not Reported 00:07:50.464 Active Power: Not Reported 00:07:50.464 Non-Operational Permissive Mode: Not Supported 00:07:50.464 00:07:50.464 Health Information 00:07:50.464 ================== 00:07:50.464 Critical Warnings: 00:07:50.464 Available Spare Space: OK 00:07:50.464 [2024-12-16 19:09:34.792325] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0, 0] process 64642 terminated unexpected 00:07:50.464 Temperature: OK 00:07:50.464 Device Reliability: OK 00:07:50.464 Read Only: No 00:07:50.464 Volatile Memory Backup: OK 00:07:50.464 Current Temperature: 323 Kelvin (50 Celsius) 00:07:50.464 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:50.464 Available Spare: 0% 00:07:50.464 Available Spare Threshold: 0% 00:07:50.464 Life Percentage Used: 0% 00:07:50.464 Data Units Read: 1001 00:07:50.464 Data Units Written: 875 00:07:50.464 Host Read Commands: 56838 00:07:50.464 Host Write Commands: 55744 00:07:50.464 Controller Busy Time: 0 minutes 00:07:50.464 Power Cycles: 0 00:07:50.464 Power On Hours: 0 hours 00:07:50.464 Unsafe Shutdowns: 0 00:07:50.464 Unrecoverable Media Errors: 0 00:07:50.464 Lifetime Error Log Entries: 0 00:07:50.464 Warning Temperature Time: 0 minutes 00:07:50.464 Critical Temperature Time: 0 minutes 00:07:50.464 00:07:50.464 Number of Queues 00:07:50.464 ================ 00:07:50.464 Number of I/O Submission Queues: 64 00:07:50.464 Number of I/O Completion Queues: 64 00:07:50.464 00:07:50.464 ZNS Specific Controller Data 00:07:50.464 ============================ 00:07:50.464 Zone Append Size Limit: 0 00:07:50.464 00:07:50.464 00:07:50.464 Active Namespaces 00:07:50.464 ================= 00:07:50.464 Namespace ID:1 00:07:50.464 Error Recovery Timeout: Unlimited 00:07:50.464 Command Set Identifier: NVM (00h) 00:07:50.464 Deallocate: Supported 00:07:50.464 Deallocated/Unwritten Error: Supported 00:07:50.464 Deallocated Read Value: All 0x00 00:07:50.464 Deallocate in Write Zeroes: Not Supported 00:07:50.464 Deallocated Guard Field: 0xFFFF 00:07:50.464 Flush: Supported 00:07:50.464 Reservation: Not Supported 00:07:50.464 Namespace Sharing Capabilities: Private 00:07:50.464 Size (in LBAs): 1310720 (5GiB) 00:07:50.464 Capacity (in LBAs): 1310720 (5GiB) 00:07:50.464 Utilization (in LBAs): 1310720 (5GiB) 00:07:50.464 Thin Provisioning: Not Supported 00:07:50.464 Per-NS Atomic Units: No 00:07:50.464 Maximum Single Source Range Length: 128 00:07:50.464 Maximum Copy Length: 128 00:07:50.464 Maximum Source Range Count: 128 00:07:50.464 NGUID/EUI64 Never Reused: No 00:07:50.464 Namespace Write Protected: No 00:07:50.464 Number of LBA Formats: 8 00:07:50.464 Current LBA Format: LBA Format #04 00:07:50.464 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:50.464 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:50.464 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:50.464 LBA Format #03:
Data Size: 512 Metadata Size: 64 00:07:50.464 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:50.464 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:50.464 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:50.464 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:50.464 00:07:50.464 NVM Specific Namespace Data 00:07:50.464 =========================== 00:07:50.464 Logical Block Storage Tag Mask: 0 00:07:50.464 Protection Information Capabilities: 00:07:50.464 16b Guard Protection Information Storage Tag Support: No 00:07:50.464 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:50.464 Storage Tag Check Read Support: No 00:07:50.464 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.464 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.464 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.464 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.464 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.464 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.464 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.464 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.464 ===================================================== 00:07:50.464 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:50.464 ===================================================== 00:07:50.464 Controller Capabilities/Features 00:07:50.464 ================================ 00:07:50.464 Vendor ID: 1b36 00:07:50.464 Subsystem Vendor ID: 1af4 00:07:50.464 Serial Number: 12343 00:07:50.464 Model Number: QEMU NVMe Ctrl 00:07:50.464 Firmware Version: 8.0.0 00:07:50.464 Recommended Arb Burst: 6 00:07:50.464 IEEE OUI Identifier: 00 54 52 00:07:50.464 Multi-path I/O 00:07:50.464 May have multiple subsystem ports: No 00:07:50.464 May have multiple controllers: Yes 00:07:50.464 Associated with SR-IOV VF: No 00:07:50.464 Max Data Transfer Size: 524288 00:07:50.464 Max Number of Namespaces: 256 00:07:50.464 Max Number of I/O Queues: 64 00:07:50.464 NVMe Specification Version (VS): 1.4 00:07:50.464 NVMe Specification Version (Identify): 1.4 00:07:50.464 Maximum Queue Entries: 2048 00:07:50.464 Contiguous Queues Required: Yes 00:07:50.464 Arbitration Mechanisms Supported 00:07:50.464 Weighted Round Robin: Not Supported 00:07:50.464 Vendor Specific: Not Supported 00:07:50.464 Reset Timeout: 7500 ms 00:07:50.464 Doorbell Stride: 4 bytes 00:07:50.464 NVM Subsystem Reset: Not Supported 00:07:50.464 Command Sets Supported 00:07:50.464 NVM Command Set: Supported 00:07:50.464 Boot Partition: Not Supported 00:07:50.464 Memory Page Size Minimum: 4096 bytes 00:07:50.464 Memory Page Size Maximum: 65536 bytes 00:07:50.464 Persistent Memory Region: Not Supported 00:07:50.464 Optional Asynchronous Events Supported 00:07:50.464 Namespace Attribute Notices: Supported 00:07:50.464 Firmware Activation Notices: Not Supported 00:07:50.464 ANA Change Notices: Not Supported 00:07:50.464 PLE Aggregate Log Change Notices: Not Supported 00:07:50.464 LBA Status Info Alert Notices: Not Supported 00:07:50.464 EGE Aggregate Log Change Notices: Not Supported 00:07:50.464 Normal NVM Subsystem Shutdown event: Not Supported 00:07:50.464 Zone 
Descriptor Change Notices: Not Supported 00:07:50.464 Discovery Log Change Notices: Not Supported 00:07:50.464 Controller Attributes 00:07:50.464 128-bit Host Identifier: Not Supported 00:07:50.464 Non-Operational Permissive Mode: Not Supported 00:07:50.464 NVM Sets: Not Supported 00:07:50.464 Read Recovery Levels: Not Supported 00:07:50.464 Endurance Groups: Supported 00:07:50.464 Predictable Latency Mode: Not Supported 00:07:50.464 Traffic Based Keep ALive: Not Supported 00:07:50.464 Namespace Granularity: Not Supported 00:07:50.464 SQ Associations: Not Supported 00:07:50.464 UUID List: Not Supported 00:07:50.464 Multi-Domain Subsystem: Not Supported 00:07:50.464 Fixed Capacity Management: Not Supported 00:07:50.464 Variable Capacity Management: Not Supported 00:07:50.464 Delete Endurance Group: Not Supported 00:07:50.464 Delete NVM Set: Not Supported 00:07:50.464 Extended LBA Formats Supported: Supported 00:07:50.464 Flexible Data Placement Supported: Supported 00:07:50.464 00:07:50.464 Controller Memory Buffer Support 00:07:50.464 ================================ 00:07:50.464 Supported: No 00:07:50.464 00:07:50.464 Persistent Memory Region Support 00:07:50.464 ================================ 00:07:50.464 Supported: No 00:07:50.464 00:07:50.464 Admin Command Set Attributes 00:07:50.464 ============================ 00:07:50.464 Security Send/Receive: Not Supported 00:07:50.464 Format NVM: Supported 00:07:50.464 Firmware Activate/Download: Not Supported 00:07:50.464 Namespace Management: Supported 00:07:50.464 Device Self-Test: Not Supported 00:07:50.464 Directives: Supported 00:07:50.464 NVMe-MI: Not Supported 00:07:50.464 Virtualization Management: Not Supported 00:07:50.464 Doorbell Buffer Config: Supported 00:07:50.464 Get LBA Status Capability: Not Supported 00:07:50.464 Command & Feature Lockdown Capability: Not Supported 00:07:50.464 Abort Command Limit: 4 00:07:50.464 Async Event Request Limit: 4 00:07:50.464 Number of Firmware Slots: N/A 00:07:50.464 Firmware Slot 1 Read-Only: N/A 00:07:50.464 Firmware Activation Without Reset: N/A 00:07:50.464 Multiple Update Detection Support: N/A 00:07:50.464 Firmware Update Granularity: No Information Provided 00:07:50.464 Per-Namespace SMART Log: Yes 00:07:50.465 Asymmetric Namespace Access Log Page: Not Supported 00:07:50.465 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:07:50.465 Command Effects Log Page: Supported 00:07:50.465 Get Log Page Extended Data: Supported 00:07:50.465 Telemetry Log Pages: Not Supported 00:07:50.465 Persistent Event Log Pages: Not Supported 00:07:50.465 Supported Log Pages Log Page: May Support 00:07:50.465 Commands Supported & Effects Log Page: Not Supported 00:07:50.465 Feature Identifiers & Effects Log Page:May Support 00:07:50.465 NVMe-MI Commands & Effects Log Page: May Support 00:07:50.465 Data Area 4 for Telemetry Log: Not Supported 00:07:50.465 Error Log Page Entries Supported: 1 00:07:50.465 Keep Alive: Not Supported 00:07:50.465 00:07:50.465 NVM Command Set Attributes 00:07:50.465 ========================== 00:07:50.465 Submission Queue Entry Size 00:07:50.465 Max: 64 00:07:50.465 Min: 64 00:07:50.465 Completion Queue Entry Size 00:07:50.465 Max: 16 00:07:50.465 Min: 16 00:07:50.465 Number of Namespaces: 256 00:07:50.465 Compare Command: Supported 00:07:50.465 Write Uncorrectable Command: Not Supported 00:07:50.465 Dataset Management Command: Supported 00:07:50.465 Write Zeroes Command: Supported 00:07:50.465 Set Features Save Field: Supported 00:07:50.465 Reservations: Not Supported 00:07:50.465 
Timestamp: Supported 00:07:50.465 Copy: Supported 00:07:50.465 Volatile Write Cache: Present 00:07:50.465 Atomic Write Unit (Normal): 1 00:07:50.465 Atomic Write Unit (PFail): 1 00:07:50.465 Atomic Compare & Write Unit: 1 00:07:50.465 Fused Compare & Write: Not Supported 00:07:50.465 Scatter-Gather List 00:07:50.465 SGL Command Set: Supported 00:07:50.465 SGL Keyed: Not Supported 00:07:50.465 SGL Bit Bucket Descriptor: Not Supported 00:07:50.465 SGL Metadata Pointer: Not Supported 00:07:50.465 Oversized SGL: Not Supported 00:07:50.465 SGL Metadata Address: Not Supported 00:07:50.465 SGL Offset: Not Supported 00:07:50.465 Transport SGL Data Block: Not Supported 00:07:50.465 Replay Protected Memory Block: Not Supported 00:07:50.465 00:07:50.465 Firmware Slot Information 00:07:50.465 ========================= 00:07:50.465 Active slot: 1 00:07:50.465 Slot 1 Firmware Revision: 1.0 00:07:50.465 00:07:50.465 00:07:50.465 Commands Supported and Effects 00:07:50.465 ============================== 00:07:50.465 Admin Commands 00:07:50.465 -------------- 00:07:50.465 Delete I/O Submission Queue (00h): Supported 00:07:50.465 Create I/O Submission Queue (01h): Supported 00:07:50.465 Get Log Page (02h): Supported 00:07:50.465 Delete I/O Completion Queue (04h): Supported 00:07:50.465 Create I/O Completion Queue (05h): Supported 00:07:50.465 Identify (06h): Supported 00:07:50.465 Abort (08h): Supported 00:07:50.465 Set Features (09h): Supported 00:07:50.465 Get Features (0Ah): Supported 00:07:50.465 Asynchronous Event Request (0Ch): Supported 00:07:50.465 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:50.465 Directive Send (19h): Supported 00:07:50.465 Directive Receive (1Ah): Supported 00:07:50.465 Virtualization Management (1Ch): Supported 00:07:50.465 Doorbell Buffer Config (7Ch): Supported 00:07:50.465 Format NVM (80h): Supported LBA-Change 00:07:50.465 I/O Commands 00:07:50.465 ------------ 00:07:50.465 Flush (00h): Supported LBA-Change 00:07:50.465 Write (01h): Supported LBA-Change 00:07:50.465 Read (02h): Supported 00:07:50.465 Compare (05h): Supported 00:07:50.465 Write Zeroes (08h): Supported LBA-Change 00:07:50.465 Dataset Management (09h): Supported LBA-Change 00:07:50.465 Unknown (0Ch): Supported 00:07:50.465 Unknown (12h): Supported 00:07:50.465 Copy (19h): Supported LBA-Change 00:07:50.465 Unknown (1Dh): Supported LBA-Change 00:07:50.465 00:07:50.465 Error Log 00:07:50.465 ========= 00:07:50.465 00:07:50.465 Arbitration 00:07:50.465 =========== 00:07:50.465 Arbitration Burst: no limit 00:07:50.465 00:07:50.465 Power Management 00:07:50.465 ================ 00:07:50.465 Number of Power States: 1 00:07:50.465 Current Power State: Power State #0 00:07:50.465 Power State #0: 00:07:50.465 Max Power: 25.00 W 00:07:50.465 Non-Operational State: Operational 00:07:50.465 Entry Latency: 16 microseconds 00:07:50.465 Exit Latency: 4 microseconds 00:07:50.465 Relative Read Throughput: 0 00:07:50.465 Relative Read Latency: 0 00:07:50.465 Relative Write Throughput: 0 00:07:50.465 Relative Write Latency: 0 00:07:50.465 Idle Power: Not Reported 00:07:50.465 Active Power: Not Reported 00:07:50.465 Non-Operational Permissive Mode: Not Supported 00:07:50.465 00:07:50.465 Health Information 00:07:50.465 ================== 00:07:50.465 Critical Warnings: 00:07:50.465 Available Spare Space: OK 00:07:50.465 Temperature: OK 00:07:50.465 Device Reliability: OK 00:07:50.465 Read Only: No 00:07:50.465 Volatile Memory Backup: OK 00:07:50.465 Current Temperature: 323 Kelvin (50 Celsius) 00:07:50.465 
Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:50.465 Available Spare: 0% 00:07:50.465 Available Spare Threshold: 0% 00:07:50.465 Life Percentage Used: 0% 00:07:50.465 Data Units Read: 1137 00:07:50.465 Data Units Written: 1066 00:07:50.465 Host Read Commands: 43279 00:07:50.465 Host Write Commands: 42702 00:07:50.465 Controller Busy Time: 0 minutes 00:07:50.465 Power Cycles: 0 00:07:50.465 Power On Hours: 0 hours 00:07:50.465 Unsafe Shutdowns: 0 00:07:50.465 Unrecoverable Media Errors: 0 00:07:50.465 Lifetime Error Log Entries: 0 00:07:50.465 Warning Temperature Time: 0 minutes 00:07:50.465 Critical Temperature Time: 0 minutes 00:07:50.465 00:07:50.465 Number of Queues 00:07:50.465 ================ 00:07:50.465 Number of I/O Submission Queues: 64 00:07:50.465 Number of I/O Completion Queues: 64 00:07:50.465 00:07:50.465 ZNS Specific Controller Data 00:07:50.465 ============================ 00:07:50.465 Zone Append Size Limit: 0 00:07:50.465 00:07:50.465 00:07:50.465 Active Namespaces 00:07:50.465 ================= 00:07:50.465 Namespace ID:1 00:07:50.465 Error Recovery Timeout: Unlimited 00:07:50.465 Command Set Identifier: NVM (00h) 00:07:50.465 Deallocate: Supported 00:07:50.465 Deallocated/Unwritten Error: Supported 00:07:50.465 Deallocated Read Value: All 0x00 00:07:50.465 Deallocate in Write Zeroes: Not Supported 00:07:50.465 Deallocated Guard Field: 0xFFFF 00:07:50.465 Flush: Supported 00:07:50.465 Reservation: Not Supported 00:07:50.465 Namespace Sharing Capabilities: Multiple Controllers 00:07:50.465 Size (in LBAs): 262144 (1GiB) 00:07:50.465 Capacity (in LBAs): 262144 (1GiB) 00:07:50.465 Utilization (in LBAs): 262144 (1GiB) 00:07:50.465 Thin Provisioning: Not Supported 00:07:50.465 Per-NS Atomic Units: No 00:07:50.465 Maximum Single Source Range Length: 128 00:07:50.465 Maximum Copy Length: 128 00:07:50.465 Maximum Source Range Count: 128 00:07:50.465 NGUID/EUI64 Never Reused: No 00:07:50.465 Namespace Write Protected: No 00:07:50.465 Endurance group ID: 1 00:07:50.465 Number of LBA Formats: 8 00:07:50.465 Current LBA Format: LBA Format #04 00:07:50.465 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:50.465 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:50.465 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:50.465 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:50.465 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:50.465 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:50.465 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:50.465 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:50.465 00:07:50.465 Get Feature FDP: 00:07:50.465 ================ 00:07:50.465 Enabled: Yes 00:07:50.465 FDP configuration index: 0 00:07:50.465 00:07:50.465 FDP configurations log page 00:07:50.465 =========================== 00:07:50.465 Number of FDP configurations: 1 00:07:50.465 Version: 0 00:07:50.465 Size: 112 00:07:50.465 FDP Configuration Descriptor: 0 00:07:50.465 Descriptor Size: 96 00:07:50.465 Reclaim Group Identifier format: 2 00:07:50.465 FDP Volatile Write Cache: Not Present 00:07:50.465 FDP Configuration: Valid 00:07:50.465 Vendor Specific Size: 0 00:07:50.465 Number of Reclaim Groups: 2 00:07:50.465 Number of Reclaim Unit Handles: 8 00:07:50.465 Max Placement Identifiers: 128 00:07:50.465 Number of Namespaces Supported: 256 00:07:50.465 Reclaim unit Nominal Size: 6000000 bytes 00:07:50.465 Estimated Reclaim Unit Time Limit: Not Reported 00:07:50.465 RUH Desc #000: RUH Type: Initially Isolated 00:07:50.465 RUH Desc #001: RUH
Type: Initially Isolated 00:07:50.465 RUH Desc #002: RUH Type: Initially Isolated 00:07:50.465 RUH Desc #003: RUH Type: Initially Isolated 00:07:50.465 RUH Desc #004: RUH Type: Initially Isolated 00:07:50.465 RUH Desc #005: RUH Type: Initially Isolated 00:07:50.465 RUH Desc #006: RUH Type: Initially Isolated 00:07:50.465 RUH Desc #007: RUH Type: Initially Isolated 00:07:50.465 00:07:50.465 FDP reclaim unit handle usage log page 00:07:50.465 ====================================== 00:07:50.465 Number of Reclaim Unit Handles: 8 00:07:50.465 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:07:50.465 RUH Usage Desc #001: RUH Attributes: Unused 00:07:50.465 RUH Usage Desc #002: RUH Attributes: Unused 00:07:50.466 RUH Usage Desc #003: RUH Attributes: Unused 00:07:50.466 RUH Usage Desc #004: RUH Attributes: Unused 00:07:50.466 RUH Usage Desc #005: RUH Attributes: Unused 00:07:50.466 RUH Usage Desc #006: RUH Attributes: Unused 00:07:50.466 RUH Usage Desc #007: RUH Attributes: Unused 00:07:50.466 00:07:50.466 FDP statistics log page 00:07:50.466 ======================= 00:07:50.466 Host bytes with metadata written: 647471104 00:07:50.466 [2024-12-16 19:09:34.793832] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0, 0] process 64642 terminated unexpected 00:07:50.466 Media bytes with metadata written: 647573504 00:07:50.466 Media bytes erased: 0 00:07:50.466 00:07:50.466 FDP events log page 00:07:50.466 =================== 00:07:50.466 Number of FDP events: 0 00:07:50.466 00:07:50.466 NVM Specific Namespace Data 00:07:50.466 =========================== 00:07:50.466 Logical Block Storage Tag Mask: 0 00:07:50.466 Protection Information Capabilities: 00:07:50.466 16b Guard Protection Information Storage Tag Support: No 00:07:50.466 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:50.466 Storage Tag Check Read Support: No 00:07:50.466 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.466 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.466 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.466 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.466 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.466 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.466 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.466 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.466 ===================================================== 00:07:50.466 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:50.466 ===================================================== 00:07:50.466 Controller Capabilities/Features 00:07:50.466 ================================ 00:07:50.466 Vendor ID: 1b36 00:07:50.466 Subsystem Vendor ID: 1af4 00:07:50.466 Serial Number: 12342 00:07:50.466 Model Number: QEMU NVMe Ctrl 00:07:50.466 Firmware Version: 8.0.0 00:07:50.466 Recommended Arb Burst: 6 00:07:50.466 IEEE OUI Identifier: 00 54 52 00:07:50.466 Multi-path I/O 00:07:50.466 May have multiple subsystem ports: No 00:07:50.466 May have multiple controllers: No 00:07:50.466 Associated with SR-IOV VF: No 00:07:50.466 Max Data Transfer Size: 524288 00:07:50.466 Max Number of Namespaces: 256
00:07:50.466 Max Number of I/O Queues: 64 00:07:50.466 NVMe Specification Version (VS): 1.4 00:07:50.466 NVMe Specification Version (Identify): 1.4 00:07:50.466 Maximum Queue Entries: 2048 00:07:50.466 Contiguous Queues Required: Yes 00:07:50.466 Arbitration Mechanisms Supported 00:07:50.466 Weighted Round Robin: Not Supported 00:07:50.466 Vendor Specific: Not Supported 00:07:50.466 Reset Timeout: 7500 ms 00:07:50.466 Doorbell Stride: 4 bytes 00:07:50.466 NVM Subsystem Reset: Not Supported 00:07:50.466 Command Sets Supported 00:07:50.466 NVM Command Set: Supported 00:07:50.466 Boot Partition: Not Supported 00:07:50.466 Memory Page Size Minimum: 4096 bytes 00:07:50.466 Memory Page Size Maximum: 65536 bytes 00:07:50.466 Persistent Memory Region: Not Supported 00:07:50.466 Optional Asynchronous Events Supported 00:07:50.466 Namespace Attribute Notices: Supported 00:07:50.466 Firmware Activation Notices: Not Supported 00:07:50.466 ANA Change Notices: Not Supported 00:07:50.466 PLE Aggregate Log Change Notices: Not Supported 00:07:50.466 LBA Status Info Alert Notices: Not Supported 00:07:50.466 EGE Aggregate Log Change Notices: Not Supported 00:07:50.466 Normal NVM Subsystem Shutdown event: Not Supported 00:07:50.466 Zone Descriptor Change Notices: Not Supported 00:07:50.466 Discovery Log Change Notices: Not Supported 00:07:50.466 Controller Attributes 00:07:50.466 128-bit Host Identifier: Not Supported 00:07:50.466 Non-Operational Permissive Mode: Not Supported 00:07:50.466 NVM Sets: Not Supported 00:07:50.466 Read Recovery Levels: Not Supported 00:07:50.466 Endurance Groups: Not Supported 00:07:50.466 Predictable Latency Mode: Not Supported 00:07:50.466 Traffic Based Keep Alive: Not Supported 00:07:50.466 Namespace Granularity: Not Supported 00:07:50.466 SQ Associations: Not Supported 00:07:50.466 UUID List: Not Supported 00:07:50.466 Multi-Domain Subsystem: Not Supported 00:07:50.466 Fixed Capacity Management: Not Supported 00:07:50.466 Variable Capacity Management: Not Supported 00:07:50.466 Delete Endurance Group: Not Supported 00:07:50.466 Delete NVM Set: Not Supported 00:07:50.466 Extended LBA Formats Supported: Supported 00:07:50.466 Flexible Data Placement Supported: Not Supported 00:07:50.466 00:07:50.466 Controller Memory Buffer Support 00:07:50.466 ================================ 00:07:50.466 Supported: No 00:07:50.466 00:07:50.466 Persistent Memory Region Support 00:07:50.466 ================================ 00:07:50.466 Supported: No 00:07:50.466 00:07:50.466 Admin Command Set Attributes 00:07:50.466 ============================ 00:07:50.466 Security Send/Receive: Not Supported 00:07:50.466 Format NVM: Supported 00:07:50.466 Firmware Activate/Download: Not Supported 00:07:50.466 Namespace Management: Supported 00:07:50.466 Device Self-Test: Not Supported 00:07:50.466 Directives: Supported 00:07:50.466 NVMe-MI: Not Supported 00:07:50.466 Virtualization Management: Not Supported 00:07:50.466 Doorbell Buffer Config: Supported 00:07:50.466 Get LBA Status Capability: Not Supported 00:07:50.466 Command & Feature Lockdown Capability: Not Supported 00:07:50.466 Abort Command Limit: 4 00:07:50.466 Async Event Request Limit: 4 00:07:50.466 Number of Firmware Slots: N/A 00:07:50.466 Firmware Slot 1 Read-Only: N/A 00:07:50.466 Firmware Activation Without Reset: N/A 00:07:50.466 Multiple Update Detection Support: N/A 00:07:50.466 Firmware Update Granularity: No Information Provided 00:07:50.466 Per-Namespace SMART Log: Yes 00:07:50.466 Asymmetric Namespace Access Log Page: Not Supported
00:07:50.466 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:07:50.466 Command Effects Log Page: Supported 00:07:50.466 Get Log Page Extended Data: Supported 00:07:50.466 Telemetry Log Pages: Not Supported 00:07:50.466 Persistent Event Log Pages: Not Supported 00:07:50.466 Supported Log Pages Log Page: May Support 00:07:50.466 Commands Supported & Effects Log Page: Not Supported 00:07:50.466 Feature Identifiers & Effects Log Page: May Support 00:07:50.466 NVMe-MI Commands & Effects Log Page: May Support 00:07:50.466 Data Area 4 for Telemetry Log: Not Supported 00:07:50.466 Error Log Page Entries Supported: 1 00:07:50.466 Keep Alive: Not Supported 00:07:50.466 00:07:50.466 NVM Command Set Attributes 00:07:50.466 ========================== 00:07:50.466 Submission Queue Entry Size 00:07:50.466 Max: 64 00:07:50.466 Min: 64 00:07:50.466 Completion Queue Entry Size 00:07:50.466 Max: 16 00:07:50.466 Min: 16 00:07:50.466 Number of Namespaces: 256 00:07:50.466 Compare Command: Supported 00:07:50.466 Write Uncorrectable Command: Not Supported 00:07:50.466 Dataset Management Command: Supported 00:07:50.466 Write Zeroes Command: Supported 00:07:50.466 Set Features Save Field: Supported 00:07:50.466 Reservations: Not Supported 00:07:50.466 Timestamp: Supported 00:07:50.466 Copy: Supported 00:07:50.466 Volatile Write Cache: Present 00:07:50.466 Atomic Write Unit (Normal): 1 00:07:50.466 Atomic Write Unit (PFail): 1 00:07:50.466 Atomic Compare & Write Unit: 1 00:07:50.466 Fused Compare & Write: Not Supported 00:07:50.466 Scatter-Gather List 00:07:50.466 SGL Command Set: Supported 00:07:50.466 SGL Keyed: Not Supported 00:07:50.467 SGL Bit Bucket Descriptor: Not Supported 00:07:50.467 SGL Metadata Pointer: Not Supported 00:07:50.467 Oversized SGL: Not Supported 00:07:50.467 SGL Metadata Address: Not Supported 00:07:50.467 SGL Offset: Not Supported 00:07:50.467 Transport SGL Data Block: Not Supported 00:07:50.467 Replay Protected Memory Block: Not Supported 00:07:50.467 00:07:50.467 Firmware Slot Information 00:07:50.467 ========================= 00:07:50.467 Active slot: 1 00:07:50.467 Slot 1 Firmware Revision: 1.0 00:07:50.467 00:07:50.467 00:07:50.467 Commands Supported and Effects 00:07:50.467 ============================== 00:07:50.467 Admin Commands 00:07:50.467 -------------- 00:07:50.467 Delete I/O Submission Queue (00h): Supported 00:07:50.467 Create I/O Submission Queue (01h): Supported 00:07:50.467 Get Log Page (02h): Supported 00:07:50.467 Delete I/O Completion Queue (04h): Supported 00:07:50.467 Create I/O Completion Queue (05h): Supported 00:07:50.467 Identify (06h): Supported 00:07:50.467 Abort (08h): Supported 00:07:50.467 Set Features (09h): Supported 00:07:50.467 Get Features (0Ah): Supported 00:07:50.467 Asynchronous Event Request (0Ch): Supported 00:07:50.467 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:50.467 Directive Send (19h): Supported 00:07:50.467 Directive Receive (1Ah): Supported 00:07:50.467 Virtualization Management (1Ch): Supported 00:07:50.467 Doorbell Buffer Config (7Ch): Supported 00:07:50.467 Format NVM (80h): Supported LBA-Change 00:07:50.467 I/O Commands 00:07:50.467 ------------ 00:07:50.467 Flush (00h): Supported LBA-Change 00:07:50.467 Write (01h): Supported LBA-Change 00:07:50.467 Read (02h): Supported 00:07:50.467 Compare (05h): Supported 00:07:50.467 Write Zeroes (08h): Supported LBA-Change 00:07:50.467 Dataset Management (09h): Supported LBA-Change 00:07:50.467 Unknown (0Ch): Supported 00:07:50.467 Unknown (12h): Supported 00:07:50.467 Copy (19h):
Supported LBA-Change 00:07:50.467 Unknown (1Dh): Supported LBA-Change 00:07:50.467 00:07:50.467 Error Log 00:07:50.467 ========= 00:07:50.467 00:07:50.467 Arbitration 00:07:50.467 =========== 00:07:50.467 Arbitration Burst: no limit 00:07:50.467 00:07:50.467 Power Management 00:07:50.467 ================ 00:07:50.467 Number of Power States: 1 00:07:50.467 Current Power State: Power State #0 00:07:50.467 Power State #0: 00:07:50.467 Max Power: 25.00 W 00:07:50.467 Non-Operational State: Operational 00:07:50.467 Entry Latency: 16 microseconds 00:07:50.467 Exit Latency: 4 microseconds 00:07:50.467 Relative Read Throughput: 0 00:07:50.467 Relative Read Latency: 0 00:07:50.467 Relative Write Throughput: 0 00:07:50.467 Relative Write Latency: 0 00:07:50.467 Idle Power: Not Reported 00:07:50.467 Active Power: Not Reported 00:07:50.467 Non-Operational Permissive Mode: Not Supported 00:07:50.467 00:07:50.467 Health Information 00:07:50.467 ================== 00:07:50.467 Critical Warnings: 00:07:50.467 Available Spare Space: OK 00:07:50.467 Temperature: OK 00:07:50.467 Device Reliability: OK 00:07:50.467 Read Only: No 00:07:50.467 Volatile Memory Backup: OK 00:07:50.467 Current Temperature: 323 Kelvin (50 Celsius) 00:07:50.467 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:50.467 Available Spare: 0% 00:07:50.467 Available Spare Threshold: 0% 00:07:50.467 Life Percentage Used: 0% 00:07:50.467 Data Units Read: 2293 00:07:50.467 Data Units Written: 2080 00:07:50.467 Host Read Commands: 120932 00:07:50.467 Host Write Commands: 119201 00:07:50.467 Controller Busy Time: 0 minutes 00:07:50.467 Power Cycles: 0 00:07:50.467 Power On Hours: 0 hours 00:07:50.467 Unsafe Shutdowns: 0 00:07:50.467 Unrecoverable Media Errors: 0 00:07:50.467 Lifetime Error Log Entries: 0 00:07:50.467 Warning Temperature Time: 0 minutes 00:07:50.467 Critical Temperature Time: 0 minutes 00:07:50.467 00:07:50.467 Number of Queues 00:07:50.467 ================ 00:07:50.467 Number of I/O Submission Queues: 64 00:07:50.467 Number of I/O Completion Queues: 64 00:07:50.467 00:07:50.467 ZNS Specific Controller Data 00:07:50.467 ============================ 00:07:50.467 Zone Append Size Limit: 0 00:07:50.467 00:07:50.467 00:07:50.467 Active Namespaces 00:07:50.467 ================= 00:07:50.467 Namespace ID:1 00:07:50.467 Error Recovery Timeout: Unlimited 00:07:50.467 Command Set Identifier: NVM (00h) 00:07:50.467 Deallocate: Supported 00:07:50.467 Deallocated/Unwritten Error: Supported 00:07:50.467 Deallocated Read Value: All 0x00 00:07:50.467 Deallocate in Write Zeroes: Not Supported 00:07:50.467 Deallocated Guard Field: 0xFFFF 00:07:50.467 Flush: Supported 00:07:50.467 Reservation: Not Supported 00:07:50.467 Namespace Sharing Capabilities: Private 00:07:50.467 Size (in LBAs): 1048576 (4GiB) 00:07:50.467 Capacity (in LBAs): 1048576 (4GiB) 00:07:50.467 Utilization (in LBAs): 1048576 (4GiB) 00:07:50.467 Thin Provisioning: Not Supported 00:07:50.467 Per-NS Atomic Units: No 00:07:50.467 Maximum Single Source Range Length: 128 00:07:50.467 Maximum Copy Length: 128 00:07:50.467 Maximum Source Range Count: 128 00:07:50.467 NGUID/EUI64 Never Reused: No 00:07:50.467 Namespace Write Protected: No 00:07:50.467 Number of LBA Formats: 8 00:07:50.467 Current LBA Format: LBA Format #04 00:07:50.467 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:50.467 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:50.467 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:50.467 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:50.467 LBA 
Format #04: Data Size: 4096 Metadata Size: 0 00:07:50.467 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:50.467 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:50.467 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:50.467 00:07:50.467 NVM Specific Namespace Data 00:07:50.467 =========================== 00:07:50.467 Logical Block Storage Tag Mask: 0 00:07:50.467 Protection Information Capabilities: 00:07:50.467 16b Guard Protection Information Storage Tag Support: No 00:07:50.467 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:50.467 Storage Tag Check Read Support: No 00:07:50.467 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.467 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.467 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.467 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.467 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.467 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.467 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.467 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.467 Namespace ID:2 00:07:50.467 Error Recovery Timeout: Unlimited 00:07:50.467 Command Set Identifier: NVM (00h) 00:07:50.467 Deallocate: Supported 00:07:50.467 Deallocated/Unwritten Error: Supported 00:07:50.467 Deallocated Read Value: All 0x00 00:07:50.467 Deallocate in Write Zeroes: Not Supported 00:07:50.467 Deallocated Guard Field: 0xFFFF 00:07:50.467 Flush: Supported 00:07:50.467 Reservation: Not Supported 00:07:50.467 Namespace Sharing Capabilities: Private 00:07:50.467 Size (in LBAs): 1048576 (4GiB) 00:07:50.467 Capacity (in LBAs): 1048576 (4GiB) 00:07:50.467 Utilization (in LBAs): 1048576 (4GiB) 00:07:50.467 Thin Provisioning: Not Supported 00:07:50.467 Per-NS Atomic Units: No 00:07:50.467 Maximum Single Source Range Length: 128 00:07:50.467 Maximum Copy Length: 128 00:07:50.467 Maximum Source Range Count: 128 00:07:50.467 NGUID/EUI64 Never Reused: No 00:07:50.467 Namespace Write Protected: No 00:07:50.467 Number of LBA Formats: 8 00:07:50.467 Current LBA Format: LBA Format #04 00:07:50.467 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:50.467 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:50.467 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:50.467 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:50.467 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:50.467 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:50.467 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:50.467 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:50.467 00:07:50.467 NVM Specific Namespace Data 00:07:50.467 =========================== 00:07:50.467 Logical Block Storage Tag Mask: 0 00:07:50.467 Protection Information Capabilities: 00:07:50.467 16b Guard Protection Information Storage Tag Support: No 00:07:50.467 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:50.467 Storage Tag Check Read Support: No 00:07:50.467 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.467 Extended LBA Format #01: Storage Tag Size: 0 , Protection 
Information Format: 16b Guard PI 00:07:50.467 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.467 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.467 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.467 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.467 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.467 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.468 Namespace ID:3 00:07:50.468 Error Recovery Timeout: Unlimited 00:07:50.468 Command Set Identifier: NVM (00h) 00:07:50.468 Deallocate: Supported 00:07:50.468 Deallocated/Unwritten Error: Supported 00:07:50.468 Deallocated Read Value: All 0x00 00:07:50.468 Deallocate in Write Zeroes: Not Supported 00:07:50.468 Deallocated Guard Field: 0xFFFF 00:07:50.468 Flush: Supported 00:07:50.468 Reservation: Not Supported 00:07:50.468 Namespace Sharing Capabilities: Private 00:07:50.468 Size (in LBAs): 1048576 (4GiB) 00:07:50.761 Capacity (in LBAs): 1048576 (4GiB) 00:07:50.761 Utilization (in LBAs): 1048576 (4GiB) 00:07:50.761 Thin Provisioning: Not Supported 00:07:50.761 Per-NS Atomic Units: No 00:07:50.761 Maximum Single Source Range Length: 128 00:07:50.761 Maximum Copy Length: 128 00:07:50.761 Maximum Source Range Count: 128 00:07:50.761 NGUID/EUI64 Never Reused: No 00:07:50.761 Namespace Write Protected: No 00:07:50.761 Number of LBA Formats: 8 00:07:50.761 Current LBA Format: LBA Format #04 00:07:50.761 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:50.761 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:50.761 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:50.761 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:50.761 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:50.761 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:50.761 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:50.761 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:50.761 00:07:50.761 NVM Specific Namespace Data 00:07:50.761 =========================== 00:07:50.761 Logical Block Storage Tag Mask: 0 00:07:50.761 Protection Information Capabilities: 00:07:50.761 16b Guard Protection Information Storage Tag Support: No 00:07:50.761 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:50.761 Storage Tag Check Read Support: No 00:07:50.761 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.761 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.761 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.761 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.761 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.761 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.761 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.761 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.761 19:09:34 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:50.761 19:09:34 nvme.nvme_identify -- nvme/nvme.sh@16 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:07:50.761 ===================================================== 00:07:50.761 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:50.761 ===================================================== 00:07:50.761 Controller Capabilities/Features 00:07:50.761 ================================ 00:07:50.761 Vendor ID: 1b36 00:07:50.761 Subsystem Vendor ID: 1af4 00:07:50.761 Serial Number: 12340 00:07:50.761 Model Number: QEMU NVMe Ctrl 00:07:50.761 Firmware Version: 8.0.0 00:07:50.761 Recommended Arb Burst: 6 00:07:50.761 IEEE OUI Identifier: 00 54 52 00:07:50.761 Multi-path I/O 00:07:50.761 May have multiple subsystem ports: No 00:07:50.761 May have multiple controllers: No 00:07:50.761 Associated with SR-IOV VF: No 00:07:50.761 Max Data Transfer Size: 524288 00:07:50.761 Max Number of Namespaces: 256 00:07:50.761 Max Number of I/O Queues: 64 00:07:50.761 NVMe Specification Version (VS): 1.4 00:07:50.761 NVMe Specification Version (Identify): 1.4 00:07:50.761 Maximum Queue Entries: 2048 00:07:50.761 Contiguous Queues Required: Yes 00:07:50.762 Arbitration Mechanisms Supported 00:07:50.762 Weighted Round Robin: Not Supported 00:07:50.762 Vendor Specific: Not Supported 00:07:50.762 Reset Timeout: 7500 ms 00:07:50.762 Doorbell Stride: 4 bytes 00:07:50.762 NVM Subsystem Reset: Not Supported 00:07:50.762 Command Sets Supported 00:07:50.762 NVM Command Set: Supported 00:07:50.762 Boot Partition: Not Supported 00:07:50.762 Memory Page Size Minimum: 4096 bytes 00:07:50.762 Memory Page Size Maximum: 65536 bytes 00:07:50.762 Persistent Memory Region: Not Supported 00:07:50.762 Optional Asynchronous Events Supported 00:07:50.762 Namespace Attribute Notices: Supported 00:07:50.762 Firmware Activation Notices: Not Supported 00:07:50.762 ANA Change Notices: Not Supported 00:07:50.762 PLE Aggregate Log Change Notices: Not Supported 00:07:50.762 LBA Status Info Alert Notices: Not Supported 00:07:50.762 EGE Aggregate Log Change Notices: Not Supported 00:07:50.762 Normal NVM Subsystem Shutdown event: Not Supported 00:07:50.762 Zone Descriptor Change Notices: Not Supported 00:07:50.762 Discovery Log Change Notices: Not Supported 00:07:50.762 Controller Attributes 00:07:50.762 128-bit Host Identifier: Not Supported 00:07:50.762 Non-Operational Permissive Mode: Not Supported 00:07:50.762 NVM Sets: Not Supported 00:07:50.762 Read Recovery Levels: Not Supported 00:07:50.762 Endurance Groups: Not Supported 00:07:50.762 Predictable Latency Mode: Not Supported 00:07:50.762 Traffic Based Keep Alive: Not Supported 00:07:50.762 Namespace Granularity: Not Supported 00:07:50.762 SQ Associations: Not Supported 00:07:50.762 UUID List: Not Supported 00:07:50.762 Multi-Domain Subsystem: Not Supported 00:07:50.762 Fixed Capacity Management: Not Supported 00:07:50.762 Variable Capacity Management: Not Supported 00:07:50.762 Delete Endurance Group: Not Supported 00:07:50.762 Delete NVM Set: Not Supported 00:07:50.762 Extended LBA Formats Supported: Supported 00:07:50.762 Flexible Data Placement Supported: Not Supported 00:07:50.762 00:07:50.762 Controller Memory Buffer Support 00:07:50.762 ================================ 00:07:50.762 Supported: No 00:07:50.762 00:07:50.762 Persistent Memory Region Support 00:07:50.762 ================================ 00:07:50.762 Supported: No 00:07:50.762 00:07:50.762 Admin Command Set Attributes 00:07:50.762 ============================ 00:07:50.762 Security Send/Receive: Not Supported 00:07:50.762
Format NVM: Supported 00:07:50.762 Firmware Activate/Download: Not Supported 00:07:50.762 Namespace Management: Supported 00:07:50.762 Device Self-Test: Not Supported 00:07:50.762 Directives: Supported 00:07:50.762 NVMe-MI: Not Supported 00:07:50.762 Virtualization Management: Not Supported 00:07:50.762 Doorbell Buffer Config: Supported 00:07:50.762 Get LBA Status Capability: Not Supported 00:07:50.762 Command & Feature Lockdown Capability: Not Supported 00:07:50.762 Abort Command Limit: 4 00:07:50.762 Async Event Request Limit: 4 00:07:50.762 Number of Firmware Slots: N/A 00:07:50.762 Firmware Slot 1 Read-Only: N/A 00:07:50.762 Firmware Activation Without Reset: N/A 00:07:50.762 Multiple Update Detection Support: N/A 00:07:50.762 Firmware Update Granularity: No Information Provided 00:07:50.762 Per-Namespace SMART Log: Yes 00:07:50.762 Asymmetric Namespace Access Log Page: Not Supported 00:07:50.762 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:07:50.762 Command Effects Log Page: Supported 00:07:50.762 Get Log Page Extended Data: Supported 00:07:50.762 Telemetry Log Pages: Not Supported 00:07:50.762 Persistent Event Log Pages: Not Supported 00:07:50.762 Supported Log Pages Log Page: May Support 00:07:50.762 Commands Supported & Effects Log Page: Not Supported 00:07:50.762 Feature Identifiers & Effects Log Page: May Support 00:07:50.762 NVMe-MI Commands & Effects Log Page: May Support 00:07:50.762 Data Area 4 for Telemetry Log: Not Supported 00:07:50.762 Error Log Page Entries Supported: 1 00:07:50.762 Keep Alive: Not Supported 00:07:50.762 00:07:50.762 NVM Command Set Attributes 00:07:50.762 ========================== 00:07:50.762 Submission Queue Entry Size 00:07:50.762 Max: 64 00:07:50.762 Min: 64 00:07:50.762 Completion Queue Entry Size 00:07:50.762 Max: 16 00:07:50.762 Min: 16 00:07:50.762 Number of Namespaces: 256 00:07:50.762 Compare Command: Supported 00:07:50.762 Write Uncorrectable Command: Not Supported 00:07:50.762 Dataset Management Command: Supported 00:07:50.762 Write Zeroes Command: Supported 00:07:50.762 Set Features Save Field: Supported 00:07:50.762 Reservations: Not Supported 00:07:50.762 Timestamp: Supported 00:07:50.762 Copy: Supported 00:07:50.762 Volatile Write Cache: Present 00:07:50.762 Atomic Write Unit (Normal): 1 00:07:50.762 Atomic Write Unit (PFail): 1 00:07:50.762 Atomic Compare & Write Unit: 1 00:07:50.762 Fused Compare & Write: Not Supported 00:07:50.762 Scatter-Gather List 00:07:50.762 SGL Command Set: Supported 00:07:50.762 SGL Keyed: Not Supported 00:07:50.762 SGL Bit Bucket Descriptor: Not Supported 00:07:50.762 SGL Metadata Pointer: Not Supported 00:07:50.762 Oversized SGL: Not Supported 00:07:50.762 SGL Metadata Address: Not Supported 00:07:50.762 SGL Offset: Not Supported 00:07:50.762 Transport SGL Data Block: Not Supported 00:07:50.762 Replay Protected Memory Block: Not Supported 00:07:50.762 00:07:50.762 Firmware Slot Information 00:07:50.762 ========================= 00:07:50.762 Active slot: 1 00:07:50.762 Slot 1 Firmware Revision: 1.0 00:07:50.762 00:07:50.762 00:07:50.762 Commands Supported and Effects 00:07:50.762 ============================== 00:07:50.762 Admin Commands 00:07:50.762 -------------- 00:07:50.762 Delete I/O Submission Queue (00h): Supported 00:07:50.762 Create I/O Submission Queue (01h): Supported 00:07:50.762 Get Log Page (02h): Supported 00:07:50.762 Delete I/O Completion Queue (04h): Supported 00:07:50.762 Create I/O Completion Queue (05h): Supported 00:07:50.762 Identify (06h): Supported 00:07:50.762 Abort (08h): Supported
00:07:50.762 Set Features (09h): Supported 00:07:50.762 Get Features (0Ah): Supported 00:07:50.762 Asynchronous Event Request (0Ch): Supported 00:07:50.762 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:50.762 Directive Send (19h): Supported 00:07:50.762 Directive Receive (1Ah): Supported 00:07:50.762 Virtualization Management (1Ch): Supported 00:07:50.762 Doorbell Buffer Config (7Ch): Supported 00:07:50.762 Format NVM (80h): Supported LBA-Change 00:07:50.762 I/O Commands 00:07:50.762 ------------ 00:07:50.762 Flush (00h): Supported LBA-Change 00:07:50.762 Write (01h): Supported LBA-Change 00:07:50.762 Read (02h): Supported 00:07:50.762 Compare (05h): Supported 00:07:50.762 Write Zeroes (08h): Supported LBA-Change 00:07:50.762 Dataset Management (09h): Supported LBA-Change 00:07:50.762 Unknown (0Ch): Supported 00:07:50.762 Unknown (12h): Supported 00:07:50.762 Copy (19h): Supported LBA-Change 00:07:50.762 Unknown (1Dh): Supported LBA-Change 00:07:50.762 00:07:50.762 Error Log 00:07:50.762 ========= 00:07:50.762 00:07:50.762 Arbitration 00:07:50.762 =========== 00:07:50.762 Arbitration Burst: no limit 00:07:50.762 00:07:50.762 Power Management 00:07:50.762 ================ 00:07:50.762 Number of Power States: 1 00:07:50.762 Current Power State: Power State #0 00:07:50.762 Power State #0: 00:07:50.762 Max Power: 25.00 W 00:07:50.762 Non-Operational State: Operational 00:07:50.762 Entry Latency: 16 microseconds 00:07:50.762 Exit Latency: 4 microseconds 00:07:50.762 Relative Read Throughput: 0 00:07:50.762 Relative Read Latency: 0 00:07:50.763 Relative Write Throughput: 0 00:07:50.763 Relative Write Latency: 0 00:07:50.763 Idle Power: Not Reported 00:07:50.763 Active Power: Not Reported 00:07:50.763 Non-Operational Permissive Mode: Not Supported 00:07:50.763 00:07:50.763 Health Information 00:07:50.763 ================== 00:07:50.763 Critical Warnings: 00:07:50.763 Available Spare Space: OK 00:07:50.763 Temperature: OK 00:07:50.763 Device Reliability: OK 00:07:50.763 Read Only: No 00:07:50.763 Volatile Memory Backup: OK 00:07:50.763 Current Temperature: 323 Kelvin (50 Celsius) 00:07:50.763 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:50.763 Available Spare: 0% 00:07:50.763 Available Spare Threshold: 0% 00:07:50.763 Life Percentage Used: 0% 00:07:50.763 Data Units Read: 654 00:07:50.763 Data Units Written: 582 00:07:50.763 Host Read Commands: 39089 00:07:50.763 Host Write Commands: 38875 00:07:50.763 Controller Busy Time: 0 minutes 00:07:50.763 Power Cycles: 0 00:07:50.763 Power On Hours: 0 hours 00:07:50.763 Unsafe Shutdowns: 0 00:07:50.763 Unrecoverable Media Errors: 0 00:07:50.763 Lifetime Error Log Entries: 0 00:07:50.763 Warning Temperature Time: 0 minutes 00:07:50.763 Critical Temperature Time: 0 minutes 00:07:50.763 00:07:50.763 Number of Queues 00:07:50.763 ================ 00:07:50.763 Number of I/O Submission Queues: 64 00:07:50.763 Number of I/O Completion Queues: 64 00:07:50.763 00:07:50.763 ZNS Specific Controller Data 00:07:50.763 ============================ 00:07:50.763 Zone Append Size Limit: 0 00:07:50.763 00:07:50.763 00:07:50.763 Active Namespaces 00:07:50.763 ================= 00:07:50.763 Namespace ID:1 00:07:50.763 Error Recovery Timeout: Unlimited 00:07:50.763 Command Set Identifier: NVM (00h) 00:07:50.763 Deallocate: Supported 00:07:50.763 Deallocated/Unwritten Error: Supported 00:07:50.763 Deallocated Read Value: All 0x00 00:07:50.763 Deallocate in Write Zeroes: Not Supported 00:07:50.763 Deallocated Guard Field: 0xFFFF 00:07:50.763 Flush: 
Supported 00:07:50.763 Reservation: Not Supported 00:07:50.763 Metadata Transferred as: Separate Metadata Buffer 00:07:50.763 Namespace Sharing Capabilities: Private 00:07:50.763 Size (in LBAs): 1548666 (5GiB) 00:07:50.763 Capacity (in LBAs): 1548666 (5GiB) 00:07:50.763 Utilization (in LBAs): 1548666 (5GiB) 00:07:50.763 Thin Provisioning: Not Supported 00:07:50.763 Per-NS Atomic Units: No 00:07:50.763 Maximum Single Source Range Length: 128 00:07:50.763 Maximum Copy Length: 128 00:07:50.763 Maximum Source Range Count: 128 00:07:50.763 NGUID/EUI64 Never Reused: No 00:07:50.763 Namespace Write Protected: No 00:07:50.763 Number of LBA Formats: 8 00:07:50.763 Current LBA Format: LBA Format #07 00:07:50.763 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:50.763 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:50.763 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:50.763 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:50.763 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:50.763 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:50.763 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:50.763 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:50.763 00:07:50.763 NVM Specific Namespace Data 00:07:50.763 =========================== 00:07:50.763 Logical Block Storage Tag Mask: 0 00:07:50.763 Protection Information Capabilities: 00:07:50.763 16b Guard Protection Information Storage Tag Support: No 00:07:50.763 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:50.763 Storage Tag Check Read Support: No 00:07:50.763 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.763 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.763 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.763 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.763 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.763 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.763 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.763 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.763 19:09:35 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:50.763 19:09:35 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:07:51.028 ===================================================== 00:07:51.028 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:51.028 ===================================================== 00:07:51.028 Controller Capabilities/Features 00:07:51.028 ================================ 00:07:51.028 Vendor ID: 1b36 00:07:51.028 Subsystem Vendor ID: 1af4 00:07:51.028 Serial Number: 12341 00:07:51.028 Model Number: QEMU NVMe Ctrl 00:07:51.028 Firmware Version: 8.0.0 00:07:51.028 Recommended Arb Burst: 6 00:07:51.028 IEEE OUI Identifier: 00 54 52 00:07:51.028 Multi-path I/O 00:07:51.028 May have multiple subsystem ports: No 00:07:51.028 May have multiple controllers: No 00:07:51.028 Associated with SR-IOV VF: No 00:07:51.028 Max Data Transfer Size: 524288 00:07:51.028 Max Number of Namespaces: 256 00:07:51.028 Max Number of I/O Queues: 64 00:07:51.028 NVMe 
Specification Version (VS): 1.4 00:07:51.028 NVMe Specification Version (Identify): 1.4 00:07:51.028 Maximum Queue Entries: 2048 00:07:51.028 Contiguous Queues Required: Yes 00:07:51.028 Arbitration Mechanisms Supported 00:07:51.028 Weighted Round Robin: Not Supported 00:07:51.028 Vendor Specific: Not Supported 00:07:51.028 Reset Timeout: 7500 ms 00:07:51.028 Doorbell Stride: 4 bytes 00:07:51.028 NVM Subsystem Reset: Not Supported 00:07:51.028 Command Sets Supported 00:07:51.028 NVM Command Set: Supported 00:07:51.028 Boot Partition: Not Supported 00:07:51.028 Memory Page Size Minimum: 4096 bytes 00:07:51.028 Memory Page Size Maximum: 65536 bytes 00:07:51.028 Persistent Memory Region: Not Supported 00:07:51.028 Optional Asynchronous Events Supported 00:07:51.028 Namespace Attribute Notices: Supported 00:07:51.028 Firmware Activation Notices: Not Supported 00:07:51.028 ANA Change Notices: Not Supported 00:07:51.028 PLE Aggregate Log Change Notices: Not Supported 00:07:51.028 LBA Status Info Alert Notices: Not Supported 00:07:51.028 EGE Aggregate Log Change Notices: Not Supported 00:07:51.028 Normal NVM Subsystem Shutdown event: Not Supported 00:07:51.028 Zone Descriptor Change Notices: Not Supported 00:07:51.028 Discovery Log Change Notices: Not Supported 00:07:51.028 Controller Attributes 00:07:51.028 128-bit Host Identifier: Not Supported 00:07:51.028 Non-Operational Permissive Mode: Not Supported 00:07:51.028 NVM Sets: Not Supported 00:07:51.028 Read Recovery Levels: Not Supported 00:07:51.028 Endurance Groups: Not Supported 00:07:51.028 Predictable Latency Mode: Not Supported 00:07:51.028 Traffic Based Keep Alive: Not Supported 00:07:51.028 Namespace Granularity: Not Supported 00:07:51.028 SQ Associations: Not Supported 00:07:51.028 UUID List: Not Supported 00:07:51.028 Multi-Domain Subsystem: Not Supported 00:07:51.028 Fixed Capacity Management: Not Supported 00:07:51.028 Variable Capacity Management: Not Supported 00:07:51.028 Delete Endurance Group: Not Supported 00:07:51.028 Delete NVM Set: Not Supported 00:07:51.028 Extended LBA Formats Supported: Supported 00:07:51.029 Flexible Data Placement Supported: Not Supported 00:07:51.029 00:07:51.029 Controller Memory Buffer Support 00:07:51.029 ================================ 00:07:51.029 Supported: No 00:07:51.029 00:07:51.029 Persistent Memory Region Support 00:07:51.029 ================================ 00:07:51.029 Supported: No 00:07:51.029 00:07:51.029 Admin Command Set Attributes 00:07:51.029 ============================ 00:07:51.029 Security Send/Receive: Not Supported 00:07:51.029 Format NVM: Supported 00:07:51.029 Firmware Activate/Download: Not Supported 00:07:51.029 Namespace Management: Supported 00:07:51.029 Device Self-Test: Not Supported 00:07:51.029 Directives: Supported 00:07:51.029 NVMe-MI: Not Supported 00:07:51.029 Virtualization Management: Not Supported 00:07:51.029 Doorbell Buffer Config: Supported 00:07:51.029 Get LBA Status Capability: Not Supported 00:07:51.029 Command & Feature Lockdown Capability: Not Supported 00:07:51.029 Abort Command Limit: 4 00:07:51.029 Async Event Request Limit: 4 00:07:51.029 Number of Firmware Slots: N/A 00:07:51.029 Firmware Slot 1 Read-Only: N/A 00:07:51.029 Firmware Activation Without Reset: N/A 00:07:51.029 Multiple Update Detection Support: N/A 00:07:51.029 Firmware Update Granularity: No Information Provided 00:07:51.029 Per-Namespace SMART Log: Yes 00:07:51.029 Asymmetric Namespace Access Log Page: Not Supported 00:07:51.029 Subsystem NQN: nqn.2019-08.org.qemu:12341
00:07:51.029 Command Effects Log Page: Supported 00:07:51.029 Get Log Page Extended Data: Supported 00:07:51.029 Telemetry Log Pages: Not Supported 00:07:51.029 Persistent Event Log Pages: Not Supported 00:07:51.029 Supported Log Pages Log Page: May Support 00:07:51.029 Commands Supported & Effects Log Page: Not Supported 00:07:51.029 Feature Identifiers & Effects Log Page: May Support 00:07:51.029 NVMe-MI Commands & Effects Log Page: May Support 00:07:51.029 Data Area 4 for Telemetry Log: Not Supported 00:07:51.029 Error Log Page Entries Supported: 1 00:07:51.029 Keep Alive: Not Supported 00:07:51.029 00:07:51.029 NVM Command Set Attributes 00:07:51.029 ========================== 00:07:51.029 Submission Queue Entry Size 00:07:51.029 Max: 64 00:07:51.029 Min: 64 00:07:51.029 Completion Queue Entry Size 00:07:51.029 Max: 16 00:07:51.029 Min: 16 00:07:51.029 Number of Namespaces: 256 00:07:51.029 Compare Command: Supported 00:07:51.029 Write Uncorrectable Command: Not Supported 00:07:51.029 Dataset Management Command: Supported 00:07:51.029 Write Zeroes Command: Supported 00:07:51.029 Set Features Save Field: Supported 00:07:51.029 Reservations: Not Supported 00:07:51.029 Timestamp: Supported 00:07:51.029 Copy: Supported 00:07:51.029 Volatile Write Cache: Present 00:07:51.029 Atomic Write Unit (Normal): 1 00:07:51.029 Atomic Write Unit (PFail): 1 00:07:51.029 Atomic Compare & Write Unit: 1 00:07:51.029 Fused Compare & Write: Not Supported 00:07:51.029 Scatter-Gather List 00:07:51.029 SGL Command Set: Supported 00:07:51.029 SGL Keyed: Not Supported 00:07:51.029 SGL Bit Bucket Descriptor: Not Supported 00:07:51.029 SGL Metadata Pointer: Not Supported 00:07:51.029 Oversized SGL: Not Supported 00:07:51.029 SGL Metadata Address: Not Supported 00:07:51.029 SGL Offset: Not Supported 00:07:51.029 Transport SGL Data Block: Not Supported 00:07:51.029 Replay Protected Memory Block: Not Supported 00:07:51.029 00:07:51.029 Firmware Slot Information 00:07:51.029 ========================= 00:07:51.029 Active slot: 1 00:07:51.029 Slot 1 Firmware Revision: 1.0 00:07:51.029 00:07:51.029 00:07:51.029 Commands Supported and Effects 00:07:51.029 ============================== 00:07:51.029 Admin Commands 00:07:51.029 -------------- 00:07:51.029 Delete I/O Submission Queue (00h): Supported 00:07:51.029 Create I/O Submission Queue (01h): Supported 00:07:51.029 Get Log Page (02h): Supported 00:07:51.029 Delete I/O Completion Queue (04h): Supported 00:07:51.029 Create I/O Completion Queue (05h): Supported 00:07:51.029 Identify (06h): Supported 00:07:51.029 Abort (08h): Supported 00:07:51.029 Set Features (09h): Supported 00:07:51.029 Get Features (0Ah): Supported 00:07:51.029 Asynchronous Event Request (0Ch): Supported 00:07:51.029 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:51.029 Directive Send (19h): Supported 00:07:51.029 Directive Receive (1Ah): Supported 00:07:51.029 Virtualization Management (1Ch): Supported 00:07:51.029 Doorbell Buffer Config (7Ch): Supported 00:07:51.029 Format NVM (80h): Supported LBA-Change 00:07:51.029 I/O Commands 00:07:51.029 ------------ 00:07:51.029 Flush (00h): Supported LBA-Change 00:07:51.029 Write (01h): Supported LBA-Change 00:07:51.029 Read (02h): Supported 00:07:51.029 Compare (05h): Supported 00:07:51.029 Write Zeroes (08h): Supported LBA-Change 00:07:51.029 Dataset Management (09h): Supported LBA-Change 00:07:51.029 Unknown (0Ch): Supported 00:07:51.029 Unknown (12h): Supported 00:07:51.029 Copy (19h): Supported LBA-Change 00:07:51.029 Unknown (1Dh):
Supported LBA-Change 00:07:51.029 00:07:51.029 Error Log 00:07:51.029 ========= 00:07:51.029 00:07:51.029 Arbitration 00:07:51.029 =========== 00:07:51.029 Arbitration Burst: no limit 00:07:51.029 00:07:51.029 Power Management 00:07:51.029 ================ 00:07:51.029 Number of Power States: 1 00:07:51.029 Current Power State: Power State #0 00:07:51.029 Power State #0: 00:07:51.029 Max Power: 25.00 W 00:07:51.029 Non-Operational State: Operational 00:07:51.029 Entry Latency: 16 microseconds 00:07:51.029 Exit Latency: 4 microseconds 00:07:51.029 Relative Read Throughput: 0 00:07:51.029 Relative Read Latency: 0 00:07:51.029 Relative Write Throughput: 0 00:07:51.029 Relative Write Latency: 0 00:07:51.029 Idle Power: Not Reported 00:07:51.029 Active Power: Not Reported 00:07:51.029 Non-Operational Permissive Mode: Not Supported 00:07:51.029 00:07:51.029 Health Information 00:07:51.029 ================== 00:07:51.029 Critical Warnings: 00:07:51.029 Available Spare Space: OK 00:07:51.029 Temperature: OK 00:07:51.029 Device Reliability: OK 00:07:51.029 Read Only: No 00:07:51.029 Volatile Memory Backup: OK 00:07:51.029 Current Temperature: 323 Kelvin (50 Celsius) 00:07:51.029 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:51.029 Available Spare: 0% 00:07:51.029 Available Spare Threshold: 0% 00:07:51.029 Life Percentage Used: 0% 00:07:51.029 Data Units Read: 1001 00:07:51.029 Data Units Written: 875 00:07:51.029 Host Read Commands: 56838 00:07:51.029 Host Write Commands: 55744 00:07:51.029 Controller Busy Time: 0 minutes 00:07:51.029 Power Cycles: 0 00:07:51.029 Power On Hours: 0 hours 00:07:51.029 Unsafe Shutdowns: 0 00:07:51.029 Unrecoverable Media Errors: 0 00:07:51.029 Lifetime Error Log Entries: 0 00:07:51.029 Warning Temperature Time: 0 minutes 00:07:51.029 Critical Temperature Time: 0 minutes 00:07:51.029 00:07:51.029 Number of Queues 00:07:51.029 ================ 00:07:51.029 Number of I/O Submission Queues: 64 00:07:51.029 Number of I/O Completion Queues: 64 00:07:51.029 00:07:51.029 ZNS Specific Controller Data 00:07:51.029 ============================ 00:07:51.029 Zone Append Size Limit: 0 00:07:51.029 00:07:51.029 00:07:51.029 Active Namespaces 00:07:51.029 ================= 00:07:51.029 Namespace ID:1 00:07:51.029 Error Recovery Timeout: Unlimited 00:07:51.029 Command Set Identifier: NVM (00h) 00:07:51.029 Deallocate: Supported 00:07:51.029 Deallocated/Unwritten Error: Supported 00:07:51.029 Deallocated Read Value: All 0x00 00:07:51.029 Deallocate in Write Zeroes: Not Supported 00:07:51.029 Deallocated Guard Field: 0xFFFF 00:07:51.029 Flush: Supported 00:07:51.029 Reservation: Not Supported 00:07:51.029 Namespace Sharing Capabilities: Private 00:07:51.029 Size (in LBAs): 1310720 (5GiB) 00:07:51.029 Capacity (in LBAs): 1310720 (5GiB) 00:07:51.029 Utilization (in LBAs): 1310720 (5GiB) 00:07:51.029 Thin Provisioning: Not Supported 00:07:51.029 Per-NS Atomic Units: No 00:07:51.029 Maximum Single Source Range Length: 128 00:07:51.029 Maximum Copy Length: 128 00:07:51.029 Maximum Source Range Count: 128 00:07:51.029 NGUID/EUI64 Never Reused: No 00:07:51.029 Namespace Write Protected: No 00:07:51.029 Number of LBA Formats: 8 00:07:51.029 Current LBA Format: LBA Format #04 00:07:51.029 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:51.029 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:51.029 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:51.029 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:51.030 LBA Format #04: Data Size: 4096 Metadata Size: 0 
00:07:51.030 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:51.030 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:51.030 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:51.030 00:07:51.030 NVM Specific Namespace Data 00:07:51.030 =========================== 00:07:51.030 Logical Block Storage Tag Mask: 0 00:07:51.030 Protection Information Capabilities: 00:07:51.030 16b Guard Protection Information Storage Tag Support: No 00:07:51.030 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:51.030 Storage Tag Check Read Support: No 00:07:51.030 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.030 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.030 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.030 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.030 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.030 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.030 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.030 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.030 19:09:35 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:51.030 19:09:35 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:07:51.289 ===================================================== 00:07:51.289 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:51.289 ===================================================== 00:07:51.289 Controller Capabilities/Features 00:07:51.289 ================================ 00:07:51.289 Vendor ID: 1b36 00:07:51.289 Subsystem Vendor ID: 1af4 00:07:51.289 Serial Number: 12342 00:07:51.289 Model Number: QEMU NVMe Ctrl 00:07:51.289 Firmware Version: 8.0.0 00:07:51.289 Recommended Arb Burst: 6 00:07:51.289 IEEE OUI Identifier: 00 54 52 00:07:51.289 Multi-path I/O 00:07:51.289 May have multiple subsystem ports: No 00:07:51.289 May have multiple controllers: No 00:07:51.289 Associated with SR-IOV VF: No 00:07:51.289 Max Data Transfer Size: 524288 00:07:51.289 Max Number of Namespaces: 256 00:07:51.289 Max Number of I/O Queues: 64 00:07:51.289 NVMe Specification Version (VS): 1.4 00:07:51.289 NVMe Specification Version (Identify): 1.4 00:07:51.289 Maximum Queue Entries: 2048 00:07:51.289 Contiguous Queues Required: Yes 00:07:51.289 Arbitration Mechanisms Supported 00:07:51.289 Weighted Round Robin: Not Supported 00:07:51.289 Vendor Specific: Not Supported 00:07:51.289 Reset Timeout: 7500 ms 00:07:51.289 Doorbell Stride: 4 bytes 00:07:51.289 NVM Subsystem Reset: Not Supported 00:07:51.289 Command Sets Supported 00:07:51.289 NVM Command Set: Supported 00:07:51.289 Boot Partition: Not Supported 00:07:51.289 Memory Page Size Minimum: 4096 bytes 00:07:51.289 Memory Page Size Maximum: 65536 bytes 00:07:51.289 Persistent Memory Region: Not Supported 00:07:51.289 Optional Asynchronous Events Supported 00:07:51.289 Namespace Attribute Notices: Supported 00:07:51.289 Firmware Activation Notices: Not Supported 00:07:51.289 ANA Change Notices: Not Supported 00:07:51.289 PLE Aggregate Log Change Notices: Not Supported 00:07:51.289 LBA Status Info Alert Notices: 
Not Supported 00:07:51.289 EGE Aggregate Log Change Notices: Not Supported 00:07:51.289 Normal NVM Subsystem Shutdown event: Not Supported 00:07:51.289 Zone Descriptor Change Notices: Not Supported 00:07:51.289 Discovery Log Change Notices: Not Supported 00:07:51.289 Controller Attributes 00:07:51.289 128-bit Host Identifier: Not Supported 00:07:51.289 Non-Operational Permissive Mode: Not Supported 00:07:51.289 NVM Sets: Not Supported 00:07:51.289 Read Recovery Levels: Not Supported 00:07:51.289 Endurance Groups: Not Supported 00:07:51.289 Predictable Latency Mode: Not Supported 00:07:51.289 Traffic Based Keep Alive: Not Supported 00:07:51.289 Namespace Granularity: Not Supported 00:07:51.289 SQ Associations: Not Supported 00:07:51.289 UUID List: Not Supported 00:07:51.289 Multi-Domain Subsystem: Not Supported 00:07:51.289 Fixed Capacity Management: Not Supported 00:07:51.289 Variable Capacity Management: Not Supported 00:07:51.289 Delete Endurance Group: Not Supported 00:07:51.289 Delete NVM Set: Not Supported 00:07:51.289 Extended LBA Formats Supported: Supported 00:07:51.289 Flexible Data Placement Supported: Not Supported 00:07:51.289 00:07:51.289 Controller Memory Buffer Support 00:07:51.289 ================================ 00:07:51.289 Supported: No 00:07:51.289 00:07:51.289 Persistent Memory Region Support 00:07:51.289 ================================ 00:07:51.289 Supported: No 00:07:51.289 00:07:51.289 Admin Command Set Attributes 00:07:51.289 ============================ 00:07:51.289 Security Send/Receive: Not Supported 00:07:51.289 Format NVM: Supported 00:07:51.289 Firmware Activate/Download: Not Supported 00:07:51.289 Namespace Management: Supported 00:07:51.289 Device Self-Test: Not Supported 00:07:51.289 Directives: Supported 00:07:51.289 NVMe-MI: Not Supported 00:07:51.289 Virtualization Management: Not Supported 00:07:51.289 Doorbell Buffer Config: Supported 00:07:51.289 Get LBA Status Capability: Not Supported 00:07:51.289 Command & Feature Lockdown Capability: Not Supported 00:07:51.289 Abort Command Limit: 4 00:07:51.289 Async Event Request Limit: 4 00:07:51.289 Number of Firmware Slots: N/A 00:07:51.289 Firmware Slot 1 Read-Only: N/A 00:07:51.289 Firmware Activation Without Reset: N/A 00:07:51.289 Multiple Update Detection Support: N/A 00:07:51.289 Firmware Update Granularity: No Information Provided 00:07:51.289 Per-Namespace SMART Log: Yes 00:07:51.289 Asymmetric Namespace Access Log Page: Not Supported 00:07:51.289 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:07:51.289 Command Effects Log Page: Supported 00:07:51.289 Get Log Page Extended Data: Supported 00:07:51.289 Telemetry Log Pages: Not Supported 00:07:51.289 Persistent Event Log Pages: Not Supported 00:07:51.289 Supported Log Pages Log Page: May Support 00:07:51.289 Commands Supported & Effects Log Page: Not Supported 00:07:51.289 Feature Identifiers & Effects Log Page: May Support 00:07:51.289 NVMe-MI Commands & Effects Log Page: May Support 00:07:51.289 Data Area 4 for Telemetry Log: Not Supported 00:07:51.289 Error Log Page Entries Supported: 1 00:07:51.289 Keep Alive: Not Supported 00:07:51.289 00:07:51.289 NVM Command Set Attributes 00:07:51.289 ========================== 00:07:51.289 Submission Queue Entry Size 00:07:51.289 Max: 64 00:07:51.289 Min: 64 00:07:51.289 Completion Queue Entry Size 00:07:51.289 Max: 16 00:07:51.289 Min: 16 00:07:51.289 Number of Namespaces: 256 00:07:51.289 Compare Command: Supported 00:07:51.289 Write Uncorrectable Command: Not Supported 00:07:51.289 Dataset Management Command:
Supported 00:07:51.289 Write Zeroes Command: Supported 00:07:51.289 Set Features Save Field: Supported 00:07:51.289 Reservations: Not Supported 00:07:51.289 Timestamp: Supported 00:07:51.289 Copy: Supported 00:07:51.289 Volatile Write Cache: Present 00:07:51.289 Atomic Write Unit (Normal): 1 00:07:51.289 Atomic Write Unit (PFail): 1 00:07:51.289 Atomic Compare & Write Unit: 1 00:07:51.289 Fused Compare & Write: Not Supported 00:07:51.289 Scatter-Gather List 00:07:51.289 SGL Command Set: Supported 00:07:51.289 SGL Keyed: Not Supported 00:07:51.289 SGL Bit Bucket Descriptor: Not Supported 00:07:51.289 SGL Metadata Pointer: Not Supported 00:07:51.289 Oversized SGL: Not Supported 00:07:51.289 SGL Metadata Address: Not Supported 00:07:51.289 SGL Offset: Not Supported 00:07:51.289 Transport SGL Data Block: Not Supported 00:07:51.289 Replay Protected Memory Block: Not Supported 00:07:51.289 00:07:51.289 Firmware Slot Information 00:07:51.289 ========================= 00:07:51.289 Active slot: 1 00:07:51.289 Slot 1 Firmware Revision: 1.0 00:07:51.289 00:07:51.289 00:07:51.289 Commands Supported and Effects 00:07:51.289 ============================== 00:07:51.289 Admin Commands 00:07:51.289 -------------- 00:07:51.289 Delete I/O Submission Queue (00h): Supported 00:07:51.289 Create I/O Submission Queue (01h): Supported 00:07:51.289 Get Log Page (02h): Supported 00:07:51.289 Delete I/O Completion Queue (04h): Supported 00:07:51.290 Create I/O Completion Queue (05h): Supported 00:07:51.290 Identify (06h): Supported 00:07:51.290 Abort (08h): Supported 00:07:51.290 Set Features (09h): Supported 00:07:51.290 Get Features (0Ah): Supported 00:07:51.290 Asynchronous Event Request (0Ch): Supported 00:07:51.290 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:51.290 Directive Send (19h): Supported 00:07:51.290 Directive Receive (1Ah): Supported 00:07:51.290 Virtualization Management (1Ch): Supported 00:07:51.290 Doorbell Buffer Config (7Ch): Supported 00:07:51.290 Format NVM (80h): Supported LBA-Change 00:07:51.290 I/O Commands 00:07:51.290 ------------ 00:07:51.290 Flush (00h): Supported LBA-Change 00:07:51.290 Write (01h): Supported LBA-Change 00:07:51.290 Read (02h): Supported 00:07:51.290 Compare (05h): Supported 00:07:51.290 Write Zeroes (08h): Supported LBA-Change 00:07:51.290 Dataset Management (09h): Supported LBA-Change 00:07:51.290 Unknown (0Ch): Supported 00:07:51.290 Unknown (12h): Supported 00:07:51.290 Copy (19h): Supported LBA-Change 00:07:51.290 Unknown (1Dh): Supported LBA-Change 00:07:51.290 00:07:51.290 Error Log 00:07:51.290 ========= 00:07:51.290 00:07:51.290 Arbitration 00:07:51.290 =========== 00:07:51.290 Arbitration Burst: no limit 00:07:51.290 00:07:51.290 Power Management 00:07:51.290 ================ 00:07:51.290 Number of Power States: 1 00:07:51.290 Current Power State: Power State #0 00:07:51.290 Power State #0: 00:07:51.290 Max Power: 25.00 W 00:07:51.290 Non-Operational State: Operational 00:07:51.290 Entry Latency: 16 microseconds 00:07:51.290 Exit Latency: 4 microseconds 00:07:51.290 Relative Read Throughput: 0 00:07:51.290 Relative Read Latency: 0 00:07:51.290 Relative Write Throughput: 0 00:07:51.290 Relative Write Latency: 0 00:07:51.290 Idle Power: Not Reported 00:07:51.290 Active Power: Not Reported 00:07:51.290 Non-Operational Permissive Mode: Not Supported 00:07:51.290 00:07:51.290 Health Information 00:07:51.290 ================== 00:07:51.290 Critical Warnings: 00:07:51.290 Available Spare Space: OK 00:07:51.290 Temperature: OK 00:07:51.290 Device 
Reliability: OK 00:07:51.290 Read Only: No 00:07:51.290 Volatile Memory Backup: OK 00:07:51.290 Current Temperature: 323 Kelvin (50 Celsius) 00:07:51.290 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:51.290 Available Spare: 0% 00:07:51.290 Available Spare Threshold: 0% 00:07:51.290 Life Percentage Used: 0% 00:07:51.290 Data Units Read: 2293 00:07:51.290 Data Units Written: 2080 00:07:51.290 Host Read Commands: 120932 00:07:51.290 Host Write Commands: 119201 00:07:51.290 Controller Busy Time: 0 minutes 00:07:51.290 Power Cycles: 0 00:07:51.290 Power On Hours: 0 hours 00:07:51.290 Unsafe Shutdowns: 0 00:07:51.290 Unrecoverable Media Errors: 0 00:07:51.290 Lifetime Error Log Entries: 0 00:07:51.290 Warning Temperature Time: 0 minutes 00:07:51.290 Critical Temperature Time: 0 minutes 00:07:51.290 00:07:51.290 Number of Queues 00:07:51.290 ================ 00:07:51.290 Number of I/O Submission Queues: 64 00:07:51.290 Number of I/O Completion Queues: 64 00:07:51.290 00:07:51.290 ZNS Specific Controller Data 00:07:51.290 ============================ 00:07:51.290 Zone Append Size Limit: 0 00:07:51.290 00:07:51.290 00:07:51.290 Active Namespaces 00:07:51.290 ================= 00:07:51.290 Namespace ID:1 00:07:51.290 Error Recovery Timeout: Unlimited 00:07:51.290 Command Set Identifier: NVM (00h) 00:07:51.290 Deallocate: Supported 00:07:51.290 Deallocated/Unwritten Error: Supported 00:07:51.290 Deallocated Read Value: All 0x00 00:07:51.290 Deallocate in Write Zeroes: Not Supported 00:07:51.290 Deallocated Guard Field: 0xFFFF 00:07:51.290 Flush: Supported 00:07:51.290 Reservation: Not Supported 00:07:51.290 Namespace Sharing Capabilities: Private 00:07:51.290 Size (in LBAs): 1048576 (4GiB) 00:07:51.290 Capacity (in LBAs): 1048576 (4GiB) 00:07:51.290 Utilization (in LBAs): 1048576 (4GiB) 00:07:51.290 Thin Provisioning: Not Supported 00:07:51.290 Per-NS Atomic Units: No 00:07:51.290 Maximum Single Source Range Length: 128 00:07:51.290 Maximum Copy Length: 128 00:07:51.290 Maximum Source Range Count: 128 00:07:51.290 NGUID/EUI64 Never Reused: No 00:07:51.290 Namespace Write Protected: No 00:07:51.290 Number of LBA Formats: 8 00:07:51.290 Current LBA Format: LBA Format #04 00:07:51.290 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:51.290 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:51.290 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:51.290 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:51.290 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:51.290 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:51.290 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:51.290 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:51.290 00:07:51.290 NVM Specific Namespace Data 00:07:51.290 =========================== 00:07:51.290 Logical Block Storage Tag Mask: 0 00:07:51.290 Protection Information Capabilities: 00:07:51.290 16b Guard Protection Information Storage Tag Support: No 00:07:51.290 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:51.290 Storage Tag Check Read Support: No 00:07:51.290 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.290 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.290 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.290 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.290 Extended LBA Format #04: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.290 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.290 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.290 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.290 Namespace ID:2 00:07:51.290 Error Recovery Timeout: Unlimited 00:07:51.290 Command Set Identifier: NVM (00h) 00:07:51.290 Deallocate: Supported 00:07:51.290 Deallocated/Unwritten Error: Supported 00:07:51.290 Deallocated Read Value: All 0x00 00:07:51.290 Deallocate in Write Zeroes: Not Supported 00:07:51.290 Deallocated Guard Field: 0xFFFF 00:07:51.290 Flush: Supported 00:07:51.290 Reservation: Not Supported 00:07:51.290 Namespace Sharing Capabilities: Private 00:07:51.290 Size (in LBAs): 1048576 (4GiB) 00:07:51.290 Capacity (in LBAs): 1048576 (4GiB) 00:07:51.290 Utilization (in LBAs): 1048576 (4GiB) 00:07:51.290 Thin Provisioning: Not Supported 00:07:51.290 Per-NS Atomic Units: No 00:07:51.290 Maximum Single Source Range Length: 128 00:07:51.290 Maximum Copy Length: 128 00:07:51.290 Maximum Source Range Count: 128 00:07:51.290 NGUID/EUI64 Never Reused: No 00:07:51.290 Namespace Write Protected: No 00:07:51.290 Number of LBA Formats: 8 00:07:51.290 Current LBA Format: LBA Format #04 00:07:51.290 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:51.290 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:51.290 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:51.290 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:51.290 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:51.290 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:51.290 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:51.290 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:51.290 00:07:51.290 NVM Specific Namespace Data 00:07:51.290 =========================== 00:07:51.290 Logical Block Storage Tag Mask: 0 00:07:51.290 Protection Information Capabilities: 00:07:51.290 16b Guard Protection Information Storage Tag Support: No 00:07:51.290 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:51.290 Storage Tag Check Read Support: No 00:07:51.290 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.290 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.291 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.291 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.291 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.291 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.291 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.291 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.291 Namespace ID:3 00:07:51.291 Error Recovery Timeout: Unlimited 00:07:51.291 Command Set Identifier: NVM (00h) 00:07:51.291 Deallocate: Supported 00:07:51.291 Deallocated/Unwritten Error: Supported 00:07:51.291 Deallocated Read Value: All 0x00 00:07:51.291 Deallocate in Write Zeroes: Not Supported 00:07:51.291 Deallocated Guard Field: 0xFFFF 00:07:51.291 Flush: Supported 00:07:51.291 Reservation: Not Supported 00:07:51.291 
Namespace Sharing Capabilities: Private 00:07:51.291 Size (in LBAs): 1048576 (4GiB) 00:07:51.291 Capacity (in LBAs): 1048576 (4GiB) 00:07:51.291 Utilization (in LBAs): 1048576 (4GiB) 00:07:51.291 Thin Provisioning: Not Supported 00:07:51.291 Per-NS Atomic Units: No 00:07:51.291 Maximum Single Source Range Length: 128 00:07:51.291 Maximum Copy Length: 128 00:07:51.291 Maximum Source Range Count: 128 00:07:51.291 NGUID/EUI64 Never Reused: No 00:07:51.291 Namespace Write Protected: No 00:07:51.291 Number of LBA Formats: 8 00:07:51.291 Current LBA Format: LBA Format #04 00:07:51.291 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:51.291 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:51.291 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:51.291 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:51.291 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:51.291 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:51.291 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:51.291 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:51.291 00:07:51.291 NVM Specific Namespace Data 00:07:51.291 =========================== 00:07:51.291 Logical Block Storage Tag Mask: 0 00:07:51.291 Protection Information Capabilities: 00:07:51.291 16b Guard Protection Information Storage Tag Support: No 00:07:51.291 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:51.291 Storage Tag Check Read Support: No 00:07:51.291 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.291 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.291 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.291 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.291 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.291 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.291 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.291 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.291 19:09:35 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:51.291 19:09:35 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:07:51.550 ===================================================== 00:07:51.550 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:51.550 ===================================================== 00:07:51.550 Controller Capabilities/Features 00:07:51.550 ================================ 00:07:51.550 Vendor ID: 1b36 00:07:51.550 Subsystem Vendor ID: 1af4 00:07:51.550 Serial Number: 12343 00:07:51.550 Model Number: QEMU NVMe Ctrl 00:07:51.550 Firmware Version: 8.0.0 00:07:51.550 Recommended Arb Burst: 6 00:07:51.550 IEEE OUI Identifier: 00 54 52 00:07:51.550 Multi-path I/O 00:07:51.550 May have multiple subsystem ports: No 00:07:51.550 May have multiple controllers: Yes 00:07:51.550 Associated with SR-IOV VF: No 00:07:51.550 Max Data Transfer Size: 524288 00:07:51.550 Max Number of Namespaces: 256 00:07:51.550 Max Number of I/O Queues: 64 00:07:51.550 NVMe Specification Version (VS): 1.4 00:07:51.550 NVMe Specification Version (Identify): 1.4 00:07:51.550 Maximum Queue Entries: 2048 
00:07:51.550 Contiguous Queues Required: Yes 00:07:51.550 Arbitration Mechanisms Supported 00:07:51.550 Weighted Round Robin: Not Supported 00:07:51.550 Vendor Specific: Not Supported 00:07:51.550 Reset Timeout: 7500 ms 00:07:51.550 Doorbell Stride: 4 bytes 00:07:51.550 NVM Subsystem Reset: Not Supported 00:07:51.550 Command Sets Supported 00:07:51.550 NVM Command Set: Supported 00:07:51.550 Boot Partition: Not Supported 00:07:51.550 Memory Page Size Minimum: 4096 bytes 00:07:51.550 Memory Page Size Maximum: 65536 bytes 00:07:51.550 Persistent Memory Region: Not Supported 00:07:51.550 Optional Asynchronous Events Supported 00:07:51.550 Namespace Attribute Notices: Supported 00:07:51.550 Firmware Activation Notices: Not Supported 00:07:51.550 ANA Change Notices: Not Supported 00:07:51.550 PLE Aggregate Log Change Notices: Not Supported 00:07:51.550 LBA Status Info Alert Notices: Not Supported 00:07:51.550 EGE Aggregate Log Change Notices: Not Supported 00:07:51.550 Normal NVM Subsystem Shutdown event: Not Supported 00:07:51.550 Zone Descriptor Change Notices: Not Supported 00:07:51.550 Discovery Log Change Notices: Not Supported 00:07:51.550 Controller Attributes 00:07:51.550 128-bit Host Identifier: Not Supported 00:07:51.550 Non-Operational Permissive Mode: Not Supported 00:07:51.550 NVM Sets: Not Supported 00:07:51.550 Read Recovery Levels: Not Supported 00:07:51.550 Endurance Groups: Supported 00:07:51.550 Predictable Latency Mode: Not Supported 00:07:51.550 Traffic Based Keep Alive: Not Supported 00:07:51.550 Namespace Granularity: Not Supported 00:07:51.550 SQ Associations: Not Supported 00:07:51.550 UUID List: Not Supported 00:07:51.550 Multi-Domain Subsystem: Not Supported 00:07:51.550 Fixed Capacity Management: Not Supported 00:07:51.550 Variable Capacity Management: Not Supported 00:07:51.550 Delete Endurance Group: Not Supported 00:07:51.550 Delete NVM Set: Not Supported 00:07:51.550 Extended LBA Formats Supported: Supported 00:07:51.550 Flexible Data Placement Supported: Supported 00:07:51.550 00:07:51.550 Controller Memory Buffer Support 00:07:51.550 ================================ 00:07:51.550 Supported: No 00:07:51.550 00:07:51.550 Persistent Memory Region Support 00:07:51.550 ================================ 00:07:51.550 Supported: No 00:07:51.550 00:07:51.550 Admin Command Set Attributes 00:07:51.550 ============================ 00:07:51.550 Security Send/Receive: Not Supported 00:07:51.550 Format NVM: Supported 00:07:51.550 Firmware Activate/Download: Not Supported 00:07:51.550 Namespace Management: Supported 00:07:51.550 Device Self-Test: Not Supported 00:07:51.550 Directives: Supported 00:07:51.550 NVMe-MI: Not Supported 00:07:51.550 Virtualization Management: Not Supported 00:07:51.550 Doorbell Buffer Config: Supported 00:07:51.550 Get LBA Status Capability: Not Supported 00:07:51.550 Command & Feature Lockdown Capability: Not Supported 00:07:51.550 Abort Command Limit: 4 00:07:51.550 Async Event Request Limit: 4 00:07:51.550 Number of Firmware Slots: N/A 00:07:51.550 Firmware Slot 1 Read-Only: N/A 00:07:51.550 Firmware Activation Without Reset: N/A 00:07:51.550 Multiple Update Detection Support: N/A 00:07:51.550 Firmware Update Granularity: No Information Provided 00:07:51.550 Per-Namespace SMART Log: Yes 00:07:51.550 Asymmetric Namespace Access Log Page: Not Supported 00:07:51.550 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:07:51.550 Command Effects Log Page: Supported 00:07:51.550 Get Log Page Extended Data: Supported 00:07:51.550 Telemetry Log Pages: Not
Supported 00:07:51.550 Persistent Event Log Pages: Not Supported 00:07:51.550 Supported Log Pages Log Page: May Support 00:07:51.550 Commands Supported & Effects Log Page: Not Supported 00:07:51.550 Feature Identifiers & Effects Log Page: May Support 00:07:51.550 NVMe-MI Commands & Effects Log Page: May Support 00:07:51.550 Data Area 4 for Telemetry Log: Not Supported 00:07:51.550 Error Log Page Entries Supported: 1 00:07:51.550 Keep Alive: Not Supported 00:07:51.550 00:07:51.550 NVM Command Set Attributes 00:07:51.550 ========================== 00:07:51.550 Submission Queue Entry Size 00:07:51.550 Max: 64 00:07:51.550 Min: 64 00:07:51.550 Completion Queue Entry Size 00:07:51.550 Max: 16 00:07:51.550 Min: 16 00:07:51.550 Number of Namespaces: 256 00:07:51.550 Compare Command: Supported 00:07:51.550 Write Uncorrectable Command: Not Supported 00:07:51.550 Dataset Management Command: Supported 00:07:51.550 Write Zeroes Command: Supported 00:07:51.551 Set Features Save Field: Supported 00:07:51.551 Reservations: Not Supported 00:07:51.551 Timestamp: Supported 00:07:51.551 Copy: Supported 00:07:51.551 Volatile Write Cache: Present 00:07:51.551 Atomic Write Unit (Normal): 1 00:07:51.551 Atomic Write Unit (PFail): 1 00:07:51.551 Atomic Compare & Write Unit: 1 00:07:51.551 Fused Compare & Write: Not Supported 00:07:51.551 Scatter-Gather List 00:07:51.551 SGL Command Set: Supported 00:07:51.551 SGL Keyed: Not Supported 00:07:51.551 SGL Bit Bucket Descriptor: Not Supported 00:07:51.551 SGL Metadata Pointer: Not Supported 00:07:51.551 Oversized SGL: Not Supported 00:07:51.551 SGL Metadata Address: Not Supported 00:07:51.551 SGL Offset: Not Supported 00:07:51.551 Transport SGL Data Block: Not Supported 00:07:51.551 Replay Protected Memory Block: Not Supported 00:07:51.551 00:07:51.551 Firmware Slot Information 00:07:51.551 ========================= 00:07:51.551 Active slot: 1 00:07:51.551 Slot 1 Firmware Revision: 1.0 00:07:51.551 00:07:51.551 00:07:51.551 Commands Supported and Effects 00:07:51.551 ============================== 00:07:51.551 Admin Commands 00:07:51.551 -------------- 00:07:51.551 Delete I/O Submission Queue (00h): Supported 00:07:51.551 Create I/O Submission Queue (01h): Supported 00:07:51.551 Get Log Page (02h): Supported 00:07:51.551 Delete I/O Completion Queue (04h): Supported 00:07:51.551 Create I/O Completion Queue (05h): Supported 00:07:51.551 Identify (06h): Supported 00:07:51.551 Abort (08h): Supported 00:07:51.551 Set Features (09h): Supported 00:07:51.551 Get Features (0Ah): Supported 00:07:51.551 Asynchronous Event Request (0Ch): Supported 00:07:51.551 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:51.551 Directive Send (19h): Supported 00:07:51.551 Directive Receive (1Ah): Supported 00:07:51.551 Virtualization Management (1Ch): Supported 00:07:51.551 Doorbell Buffer Config (7Ch): Supported 00:07:51.551 Format NVM (80h): Supported LBA-Change 00:07:51.551 I/O Commands 00:07:51.551 ------------ 00:07:51.551 Flush (00h): Supported LBA-Change 00:07:51.551 Write (01h): Supported LBA-Change 00:07:51.551 Read (02h): Supported 00:07:51.551 Compare (05h): Supported 00:07:51.551 Write Zeroes (08h): Supported LBA-Change 00:07:51.551 Dataset Management (09h): Supported LBA-Change 00:07:51.551 Unknown (0Ch): Supported 00:07:51.551 Unknown (12h): Supported 00:07:51.551 Copy (19h): Supported LBA-Change 00:07:51.551 Unknown (1Dh): Supported LBA-Change 00:07:51.551 00:07:51.551 Error Log 00:07:51.551 ========= 00:07:51.551 00:07:51.551 Arbitration 00:07:51.551 ===========
00:07:51.551 Arbitration Burst: no limit 00:07:51.551 00:07:51.551 Power Management 00:07:51.551 ================ 00:07:51.551 Number of Power States: 1 00:07:51.551 Current Power State: Power State #0 00:07:51.551 Power State #0: 00:07:51.551 Max Power: 25.00 W 00:07:51.551 Non-Operational State: Operational 00:07:51.551 Entry Latency: 16 microseconds 00:07:51.551 Exit Latency: 4 microseconds 00:07:51.551 Relative Read Throughput: 0 00:07:51.551 Relative Read Latency: 0 00:07:51.551 Relative Write Throughput: 0 00:07:51.551 Relative Write Latency: 0 00:07:51.551 Idle Power: Not Reported 00:07:51.551 Active Power: Not Reported 00:07:51.551 Non-Operational Permissive Mode: Not Supported 00:07:51.551 00:07:51.551 Health Information 00:07:51.551 ================== 00:07:51.551 Critical Warnings: 00:07:51.551 Available Spare Space: OK 00:07:51.551 Temperature: OK 00:07:51.551 Device Reliability: OK 00:07:51.551 Read Only: No 00:07:51.551 Volatile Memory Backup: OK 00:07:51.551 Current Temperature: 323 Kelvin (50 Celsius) 00:07:51.551 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:51.551 Available Spare: 0% 00:07:51.551 Available Spare Threshold: 0% 00:07:51.551 Life Percentage Used: 0% 00:07:51.551 Data Units Read: 1137 00:07:51.551 Data Units Written: 1066 00:07:51.551 Host Read Commands: 43279 00:07:51.551 Host Write Commands: 42702 00:07:51.551 Controller Busy Time: 0 minutes 00:07:51.551 Power Cycles: 0 00:07:51.551 Power On Hours: 0 hours 00:07:51.551 Unsafe Shutdowns: 0 00:07:51.551 Unrecoverable Media Errors: 0 00:07:51.551 Lifetime Error Log Entries: 0 00:07:51.551 Warning Temperature Time: 0 minutes 00:07:51.551 Critical Temperature Time: 0 minutes 00:07:51.551 00:07:51.551 Number of Queues 00:07:51.551 ================ 00:07:51.551 Number of I/O Submission Queues: 64 00:07:51.551 Number of I/O Completion Queues: 64 00:07:51.551 00:07:51.551 ZNS Specific Controller Data 00:07:51.551 ============================ 00:07:51.551 Zone Append Size Limit: 0 00:07:51.551 00:07:51.551 00:07:51.551 Active Namespaces 00:07:51.551 ================= 00:07:51.551 Namespace ID:1 00:07:51.551 Error Recovery Timeout: Unlimited 00:07:51.551 Command Set Identifier: NVM (00h) 00:07:51.551 Deallocate: Supported 00:07:51.551 Deallocated/Unwritten Error: Supported 00:07:51.551 Deallocated Read Value: All 0x00 00:07:51.551 Deallocate in Write Zeroes: Not Supported 00:07:51.551 Deallocated Guard Field: 0xFFFF 00:07:51.551 Flush: Supported 00:07:51.551 Reservation: Not Supported 00:07:51.551 Namespace Sharing Capabilities: Multiple Controllers 00:07:51.551 Size (in LBAs): 262144 (1GiB) 00:07:51.551 Capacity (in LBAs): 262144 (1GiB) 00:07:51.551 Utilization (in LBAs): 262144 (1GiB) 00:07:51.551 Thin Provisioning: Not Supported 00:07:51.551 Per-NS Atomic Units: No 00:07:51.551 Maximum Single Source Range Length: 128 00:07:51.551 Maximum Copy Length: 128 00:07:51.551 Maximum Source Range Count: 128 00:07:51.551 NGUID/EUI64 Never Reused: No 00:07:51.551 Namespace Write Protected: No 00:07:51.551 Endurance group ID: 1 00:07:51.551 Number of LBA Formats: 8 00:07:51.551 Current LBA Format: LBA Format #04 00:07:51.551 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:51.551 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:51.551 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:51.551 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:51.551 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:51.551 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:51.551 LBA Format #06: Data Size: 4096 
Metadata Size: 16 00:07:51.551 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:51.551 00:07:51.551 Get Feature FDP: 00:07:51.551 ================ 00:07:51.551 Enabled: Yes 00:07:51.551 FDP configuration index: 0 00:07:51.551 00:07:51.551 FDP configurations log page 00:07:51.551 =========================== 00:07:51.551 Number of FDP configurations: 1 00:07:51.551 Version: 0 00:07:51.551 Size: 112 00:07:51.551 FDP Configuration Descriptor: 0 00:07:51.551 Descriptor Size: 96 00:07:51.551 Reclaim Group Identifier format: 2 00:07:51.551 FDP Volatile Write Cache: Not Present 00:07:51.551 FDP Configuration: Valid 00:07:51.551 Vendor Specific Size: 0 00:07:51.551 Number of Reclaim Groups: 2 00:07:51.551 Number of Reclaim Unit Handles: 8 00:07:51.551 Max Placement Identifiers: 128 00:07:51.551 Number of Namespaces Supported: 256 00:07:51.551 Reclaim Unit Nominal Size: 6000000 bytes 00:07:51.551 Estimated Reclaim Unit Time Limit: Not Reported 00:07:51.551 RUH Desc #000: RUH Type: Initially Isolated 00:07:51.551 RUH Desc #001: RUH Type: Initially Isolated 00:07:51.551 RUH Desc #002: RUH Type: Initially Isolated 00:07:51.551 RUH Desc #003: RUH Type: Initially Isolated 00:07:51.551 RUH Desc #004: RUH Type: Initially Isolated 00:07:51.551 RUH Desc #005: RUH Type: Initially Isolated 00:07:51.551 RUH Desc #006: RUH Type: Initially Isolated 00:07:51.551 RUH Desc #007: RUH Type: Initially Isolated 00:07:51.551 00:07:51.551 FDP reclaim unit handle usage log page 00:07:51.551 ====================================== 00:07:51.551 Number of Reclaim Unit Handles: 8 00:07:51.551 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:07:51.551 RUH Usage Desc #001: RUH Attributes: Unused 00:07:51.551 RUH Usage Desc #002: RUH Attributes: Unused 00:07:51.551 RUH Usage Desc #003: RUH Attributes: Unused 00:07:51.551 RUH Usage Desc #004: RUH Attributes: Unused 00:07:51.551 RUH Usage Desc #005: RUH Attributes: Unused 00:07:51.551 RUH Usage Desc #006: RUH Attributes: Unused 00:07:51.551 RUH Usage Desc #007: RUH Attributes: Unused 00:07:51.551 00:07:51.551 FDP statistics log page 00:07:51.551 ======================= 00:07:51.551 Host bytes with metadata written: 647471104 00:07:51.551 Media bytes with metadata written: 647573504 00:07:51.551 Media bytes erased: 0 00:07:51.551 00:07:51.551 FDP events log page 00:07:51.551 =================== 00:07:51.551 Number of FDP events: 0 00:07:51.551 00:07:51.551 NVM Specific Namespace Data 00:07:51.551 =========================== 00:07:51.551 Logical Block Storage Tag Mask: 0 00:07:51.551 Protection Information Capabilities: 00:07:51.551 16b Guard Protection Information Storage Tag Support: No 00:07:51.551 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:51.552 Storage Tag Check Read Support: No 00:07:51.552 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.552 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.552 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.552 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.552 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.552 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.552 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.552 Extended LBA
Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.552 00:07:51.552 real 0m1.172s 00:07:51.552 user 0m0.425s 00:07:51.552 sys 0m0.539s 00:07:51.552 19:09:35 nvme.nvme_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:51.552 19:09:35 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:07:51.552 ************************************ 00:07:51.552 END TEST nvme_identify 00:07:51.552 ************************************ 00:07:51.552 19:09:35 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:07:51.552 19:09:35 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:51.552 19:09:35 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:51.552 19:09:35 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:51.552 ************************************ 00:07:51.552 START TEST nvme_perf 00:07:51.552 ************************************ 00:07:51.552 19:09:35 nvme.nvme_perf -- common/autotest_common.sh@1129 -- # nvme_perf 00:07:51.552 19:09:35 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:07:52.928 Initializing NVMe Controllers 00:07:52.928 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:52.928 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:52.928 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:52.928 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:52.928 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:07:52.928 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:07:52.928 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:07:52.928 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:07:52.928 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:07:52.928 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:07:52.928 Initialization complete. Launching workers. 
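[Editor's note on reading the results that follow: the spdk_nvme_perf invocation above runs reads at queue depth 128 with a 12288-byte (12 KiB) I/O size for 1 second, so the MiB/s column in the table below is derived directly from the IOPS column as MiB/s = IOPS x 12288 / 2^20. A minimal shell sketch of that conversion, using the IOPS value from the first result row as sample input (the awk one-liner is illustrative only and not part of the test suite):
  # MiB/s = IOPS * io_size_bytes / 2^20
  # 17762.92 IOPS at 12288 bytes per I/O ~= 208.16 MiB/s, matching the
  # PCIE (0000:00:10.0) NSID 1 row in the table below.
  awk 'BEGIN { printf "%.2f MiB/s\n", 17762.92 * 12288 / (1024 * 1024) }'
]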
00:07:52.928 ======================================================== 00:07:52.928 Latency(us) 00:07:52.928 Device Information : IOPS MiB/s Average min max 00:07:52.928 PCIE (0000:00:10.0) NSID 1 from core 0: 17762.92 208.16 7215.02 5838.24 33495.25 00:07:52.928 PCIE (0000:00:11.0) NSID 1 from core 0: 17762.92 208.16 7205.21 5917.85 31712.84 00:07:52.928 PCIE (0000:00:13.0) NSID 1 from core 0: 17762.92 208.16 7194.23 5926.32 30368.82 00:07:52.928 PCIE (0000:00:12.0) NSID 1 from core 0: 17762.92 208.16 7183.16 5930.25 28613.40 00:07:52.928 PCIE (0000:00:12.0) NSID 2 from core 0: 17762.92 208.16 7172.05 5962.07 26873.05 00:07:52.928 PCIE (0000:00:12.0) NSID 3 from core 0: 17826.82 208.91 7135.33 5877.65 21693.56 00:07:52.928 ======================================================== 00:07:52.928 Total : 106641.43 1249.70 7184.14 5838.24 33495.25 00:07:52.928 00:07:52.928 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:52.928 ================================================================================= 00:07:52.928 1.00000% : 6200.714us 00:07:52.928 10.00000% : 6377.157us 00:07:52.928 25.00000% : 6553.600us 00:07:52.928 50.00000% : 6856.074us 00:07:52.928 75.00000% : 7158.548us 00:07:52.928 90.00000% : 7461.022us 00:07:52.929 95.00000% : 9679.163us 00:07:52.929 98.00000% : 12351.015us 00:07:52.929 99.00000% : 13208.025us 00:07:52.929 99.50000% : 28230.892us 00:07:52.929 99.90000% : 33272.123us 00:07:52.929 99.99000% : 33473.772us 00:07:52.929 99.99900% : 33675.422us 00:07:52.929 99.99990% : 33675.422us 00:07:52.929 99.99999% : 33675.422us 00:07:52.929 00:07:52.929 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:52.929 ================================================================================= 00:07:52.929 1.00000% : 6276.332us 00:07:52.929 10.00000% : 6452.775us 00:07:52.929 25.00000% : 6604.012us 00:07:52.929 50.00000% : 6856.074us 00:07:52.929 75.00000% : 7108.135us 00:07:52.929 90.00000% : 7360.197us 00:07:52.929 95.00000% : 9679.163us 00:07:52.929 98.00000% : 12502.252us 00:07:52.929 99.00000% : 13308.849us 00:07:52.929 99.50000% : 26416.049us 00:07:52.929 99.90000% : 31457.280us 00:07:52.929 99.99000% : 31860.578us 00:07:52.929 99.99900% : 31860.578us 00:07:52.929 99.99990% : 31860.578us 00:07:52.929 99.99999% : 31860.578us 00:07:52.929 00:07:52.929 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:07:52.929 ================================================================================= 00:07:52.929 1.00000% : 6276.332us 00:07:52.929 10.00000% : 6452.775us 00:07:52.929 25.00000% : 6604.012us 00:07:52.929 50.00000% : 6856.074us 00:07:52.929 75.00000% : 7108.135us 00:07:52.929 90.00000% : 7360.197us 00:07:52.929 95.00000% : 9779.988us 00:07:52.929 98.00000% : 12552.665us 00:07:52.929 99.00000% : 13409.674us 00:07:52.929 99.50000% : 25306.978us 00:07:52.929 99.90000% : 30045.735us 00:07:52.929 99.99000% : 30449.034us 00:07:52.929 99.99900% : 30449.034us 00:07:52.929 99.99990% : 30449.034us 00:07:52.929 99.99999% : 30449.034us 00:07:52.929 00:07:52.929 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:07:52.929 ================================================================================= 00:07:52.929 1.00000% : 6276.332us 00:07:52.929 10.00000% : 6452.775us 00:07:52.929 25.00000% : 6604.012us 00:07:52.929 50.00000% : 6856.074us 00:07:52.929 75.00000% : 7108.135us 00:07:52.929 90.00000% : 7360.197us 00:07:52.929 95.00000% : 9578.338us 00:07:52.929 98.00000% : 12199.778us 00:07:52.929 99.00000% : 
13208.025us 00:07:52.929 99.50000% : 23592.960us 00:07:52.929 99.90000% : 28230.892us 00:07:52.929 99.99000% : 28634.191us 00:07:52.929 99.99900% : 28634.191us 00:07:52.929 99.99990% : 28634.191us 00:07:52.929 99.99999% : 28634.191us 00:07:52.929 00:07:52.929 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:07:52.929 ================================================================================= 00:07:52.929 1.00000% : 6276.332us 00:07:52.929 10.00000% : 6452.775us 00:07:52.929 25.00000% : 6604.012us 00:07:52.929 50.00000% : 6856.074us 00:07:52.929 75.00000% : 7108.135us 00:07:52.929 90.00000% : 7410.609us 00:07:52.929 95.00000% : 9427.102us 00:07:52.929 98.00000% : 11846.892us 00:07:52.929 99.00000% : 13308.849us 00:07:52.929 99.50000% : 21778.117us 00:07:52.929 99.90000% : 26617.698us 00:07:52.929 99.99000% : 27020.997us 00:07:52.929 99.99900% : 27020.997us 00:07:52.929 99.99990% : 27020.997us 00:07:52.929 99.99999% : 27020.997us 00:07:52.929 00:07:52.929 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:07:52.929 ================================================================================= 00:07:52.929 1.00000% : 6276.332us 00:07:52.929 10.00000% : 6452.775us 00:07:52.929 25.00000% : 6604.012us 00:07:52.929 50.00000% : 6856.074us 00:07:52.929 75.00000% : 7108.135us 00:07:52.929 90.00000% : 7410.609us 00:07:52.929 95.00000% : 9527.926us 00:07:52.929 98.00000% : 11947.717us 00:07:52.929 99.00000% : 13208.025us 00:07:52.929 99.50000% : 16333.588us 00:07:52.929 99.90000% : 21374.818us 00:07:52.929 99.99000% : 21677.292us 00:07:52.929 99.99900% : 21778.117us 00:07:52.929 99.99990% : 21778.117us 00:07:52.929 99.99999% : 21778.117us 00:07:52.929 00:07:52.929 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:52.929 ============================================================================== 00:07:52.929 Range in us Cumulative IO count 00:07:52.929 5822.622 - 5847.828: 0.0056% ( 1) 00:07:52.929 5847.828 - 5873.034: 0.0281% ( 4) 00:07:52.929 5873.034 - 5898.240: 0.0450% ( 3) 00:07:52.929 5898.240 - 5923.446: 0.0843% ( 7) 00:07:52.929 5923.446 - 5948.652: 0.1293% ( 8) 00:07:52.929 5948.652 - 5973.858: 0.1461% ( 3) 00:07:52.929 5973.858 - 5999.065: 0.1686% ( 4) 00:07:52.929 5999.065 - 6024.271: 0.1855% ( 3) 00:07:52.929 6024.271 - 6049.477: 0.2248% ( 7) 00:07:52.929 6049.477 - 6074.683: 0.2810% ( 10) 00:07:52.929 6074.683 - 6099.889: 0.3260% ( 8) 00:07:52.929 6099.889 - 6125.095: 0.4272% ( 18) 00:07:52.929 6125.095 - 6150.302: 0.5902% ( 29) 00:07:52.929 6150.302 - 6175.508: 0.8599% ( 48) 00:07:52.929 6175.508 - 6200.714: 1.3321% ( 84) 00:07:52.929 6200.714 - 6225.920: 1.9953% ( 118) 00:07:52.929 6225.920 - 6251.126: 2.9508% ( 170) 00:07:52.929 6251.126 - 6276.332: 4.2322% ( 228) 00:07:52.929 6276.332 - 6301.538: 5.6093% ( 245) 00:07:52.929 6301.538 - 6326.745: 7.1886% ( 281) 00:07:52.929 6326.745 - 6351.951: 9.1614% ( 351) 00:07:52.929 6351.951 - 6377.157: 11.0499% ( 336) 00:07:52.929 6377.157 - 6402.363: 13.0002% ( 347) 00:07:52.929 6402.363 - 6427.569: 14.9899% ( 354) 00:07:52.929 6427.569 - 6452.775: 17.0414% ( 365) 00:07:52.929 6452.775 - 6503.188: 21.3073% ( 759) 00:07:52.929 6503.188 - 6553.600: 25.4272% ( 733) 00:07:52.929 6553.600 - 6604.012: 29.7100% ( 762) 00:07:52.929 6604.012 - 6654.425: 33.9478% ( 754) 00:07:52.929 6654.425 - 6704.837: 38.3431% ( 782) 00:07:52.929 6704.837 - 6755.249: 42.7777% ( 789) 00:07:52.929 6755.249 - 6805.662: 47.2403% ( 794) 00:07:52.929 6805.662 - 6856.074: 51.6243% ( 780) 00:07:52.929 6856.074 - 
6906.486: 55.9915% ( 777) 00:07:52.929 6906.486 - 6956.898: 60.4148% ( 787) 00:07:52.929 6956.898 - 7007.311: 64.7538% ( 772) 00:07:52.929 7007.311 - 7057.723: 69.0872% ( 771) 00:07:52.929 7057.723 - 7108.135: 73.4937% ( 784) 00:07:52.929 7108.135 - 7158.548: 77.8327% ( 772) 00:07:52.929 7158.548 - 7208.960: 81.6659% ( 682) 00:07:52.929 7208.960 - 7259.372: 84.8584% ( 568) 00:07:52.929 7259.372 - 7309.785: 87.3370% ( 441) 00:07:52.929 7309.785 - 7360.197: 88.8714% ( 273) 00:07:52.929 7360.197 - 7410.609: 89.9224% ( 187) 00:07:52.929 7410.609 - 7461.022: 90.6081% ( 122) 00:07:52.929 7461.022 - 7511.434: 91.0634% ( 81) 00:07:52.929 7511.434 - 7561.846: 91.4062% ( 61) 00:07:52.929 7561.846 - 7612.258: 91.6817% ( 49) 00:07:52.929 7612.258 - 7662.671: 91.8728% ( 34) 00:07:52.929 7662.671 - 7713.083: 92.0920% ( 39) 00:07:52.929 7713.083 - 7763.495: 92.2606% ( 30) 00:07:52.929 7763.495 - 7813.908: 92.3898% ( 23) 00:07:52.929 7813.908 - 7864.320: 92.5135% ( 22) 00:07:52.929 7864.320 - 7914.732: 92.6259% ( 20) 00:07:52.929 7914.732 - 7965.145: 92.7496% ( 22) 00:07:52.929 7965.145 - 8015.557: 92.8395% ( 16) 00:07:52.929 8015.557 - 8065.969: 92.9125% ( 13) 00:07:52.929 8065.969 - 8116.382: 92.9856% ( 13) 00:07:52.929 8116.382 - 8166.794: 93.0587% ( 13) 00:07:52.929 8166.794 - 8217.206: 93.1430% ( 15) 00:07:52.929 8217.206 - 8267.618: 93.1936% ( 9) 00:07:52.929 8267.618 - 8318.031: 93.2779% ( 15) 00:07:52.929 8318.031 - 8368.443: 93.3509% ( 13) 00:07:52.929 8368.443 - 8418.855: 93.4409% ( 16) 00:07:52.929 8418.855 - 8469.268: 93.5308% ( 16) 00:07:52.929 8469.268 - 8519.680: 93.6095% ( 14) 00:07:52.929 8519.680 - 8570.092: 93.6769% ( 12) 00:07:52.929 8570.092 - 8620.505: 93.7556% ( 14) 00:07:52.929 8620.505 - 8670.917: 93.8174% ( 11) 00:07:52.929 8670.917 - 8721.329: 93.8512% ( 6) 00:07:52.929 8721.329 - 8771.742: 93.9130% ( 11) 00:07:52.929 8771.742 - 8822.154: 93.9804% ( 12) 00:07:52.929 8822.154 - 8872.566: 94.0591% ( 14) 00:07:52.930 8872.566 - 8922.978: 94.1153% ( 10) 00:07:52.930 8922.978 - 8973.391: 94.1603% ( 8) 00:07:52.930 8973.391 - 9023.803: 94.2277% ( 12) 00:07:52.930 9023.803 - 9074.215: 94.2952% ( 12) 00:07:52.930 9074.215 - 9124.628: 94.3345% ( 7) 00:07:52.930 9124.628 - 9175.040: 94.4188% ( 15) 00:07:52.930 9175.040 - 9225.452: 94.4863% ( 12) 00:07:52.930 9225.452 - 9275.865: 94.5537% ( 12) 00:07:52.930 9275.865 - 9326.277: 94.6324% ( 14) 00:07:52.930 9326.277 - 9376.689: 94.6942% ( 11) 00:07:52.930 9376.689 - 9427.102: 94.7392% ( 8) 00:07:52.930 9427.102 - 9477.514: 94.7954% ( 10) 00:07:52.930 9477.514 - 9527.926: 94.8404% ( 8) 00:07:52.930 9527.926 - 9578.338: 94.8966% ( 10) 00:07:52.930 9578.338 - 9628.751: 94.9696% ( 13) 00:07:52.930 9628.751 - 9679.163: 95.0202% ( 9) 00:07:52.930 9679.163 - 9729.575: 95.1045% ( 15) 00:07:52.930 9729.575 - 9779.988: 95.1607% ( 10) 00:07:52.930 9779.988 - 9830.400: 95.2113% ( 9) 00:07:52.930 9830.400 - 9880.812: 95.2675% ( 10) 00:07:52.930 9880.812 - 9931.225: 95.3125% ( 8) 00:07:52.930 9931.225 - 9981.637: 95.4024% ( 16) 00:07:52.930 9981.637 - 10032.049: 95.4643% ( 11) 00:07:52.930 10032.049 - 10082.462: 95.5205% ( 10) 00:07:52.930 10082.462 - 10132.874: 95.5767% ( 10) 00:07:52.930 10132.874 - 10183.286: 95.6272% ( 9) 00:07:52.930 10183.286 - 10233.698: 95.7059% ( 14) 00:07:52.930 10233.698 - 10284.111: 95.7453% ( 7) 00:07:52.930 10284.111 - 10334.523: 95.8071% ( 11) 00:07:52.930 10334.523 - 10384.935: 95.8689% ( 11) 00:07:52.930 10384.935 - 10435.348: 95.9476% ( 14) 00:07:52.930 10435.348 - 10485.760: 96.0207% ( 13) 00:07:52.930 10485.760 - 
10536.172: 96.1050% ( 15) 00:07:52.930 10536.172 - 10586.585: 96.1724% ( 12) 00:07:52.930 10586.585 - 10636.997: 96.2455% ( 13) 00:07:52.930 10636.997 - 10687.409: 96.3186% ( 13) 00:07:52.930 10687.409 - 10737.822: 96.4141% ( 17) 00:07:52.930 10737.822 - 10788.234: 96.4928% ( 14) 00:07:52.930 10788.234 - 10838.646: 96.5659% ( 13) 00:07:52.930 10838.646 - 10889.058: 96.6389% ( 13) 00:07:52.930 10889.058 - 10939.471: 96.7232% ( 15) 00:07:52.930 10939.471 - 10989.883: 96.7907% ( 12) 00:07:52.930 10989.883 - 11040.295: 96.8469% ( 10) 00:07:52.930 11040.295 - 11090.708: 96.9031% ( 10) 00:07:52.930 11090.708 - 11141.120: 96.9649% ( 11) 00:07:52.930 11141.120 - 11191.532: 97.0268% ( 11) 00:07:52.930 11191.532 - 11241.945: 97.0886% ( 11) 00:07:52.930 11241.945 - 11292.357: 97.1335% ( 8) 00:07:52.930 11292.357 - 11342.769: 97.2010% ( 12) 00:07:52.930 11342.769 - 11393.182: 97.2516% ( 9) 00:07:52.930 11393.182 - 11443.594: 97.2909% ( 7) 00:07:52.930 11443.594 - 11494.006: 97.3134% ( 4) 00:07:52.930 11494.006 - 11544.418: 97.3246% ( 2) 00:07:52.930 11544.418 - 11594.831: 97.3415% ( 3) 00:07:52.930 11594.831 - 11645.243: 97.3865% ( 8) 00:07:52.930 11645.243 - 11695.655: 97.4033% ( 3) 00:07:52.930 11695.655 - 11746.068: 97.4314% ( 5) 00:07:52.930 11746.068 - 11796.480: 97.4764% ( 8) 00:07:52.930 11796.480 - 11846.892: 97.5157% ( 7) 00:07:52.930 11846.892 - 11897.305: 97.5495% ( 6) 00:07:52.930 11897.305 - 11947.717: 97.5944% ( 8) 00:07:52.930 11947.717 - 11998.129: 97.6225% ( 5) 00:07:52.930 11998.129 - 12048.542: 97.6675% ( 8) 00:07:52.930 12048.542 - 12098.954: 97.7574% ( 16) 00:07:52.930 12098.954 - 12149.366: 97.8080% ( 9) 00:07:52.930 12149.366 - 12199.778: 97.8867% ( 14) 00:07:52.930 12199.778 - 12250.191: 97.9598% ( 13) 00:07:52.930 12250.191 - 12300.603: 97.9991% ( 7) 00:07:52.930 12300.603 - 12351.015: 98.0553% ( 10) 00:07:52.930 12351.015 - 12401.428: 98.1677% ( 20) 00:07:52.930 12401.428 - 12451.840: 98.2352% ( 12) 00:07:52.930 12451.840 - 12502.252: 98.2857% ( 9) 00:07:52.930 12502.252 - 12552.665: 98.3644% ( 14) 00:07:52.930 12552.665 - 12603.077: 98.4319% ( 12) 00:07:52.930 12603.077 - 12653.489: 98.4993% ( 12) 00:07:52.930 12653.489 - 12703.902: 98.5780% ( 14) 00:07:52.930 12703.902 - 12754.314: 98.6455% ( 12) 00:07:52.930 12754.314 - 12804.726: 98.7017% ( 10) 00:07:52.930 12804.726 - 12855.138: 98.7522% ( 9) 00:07:52.930 12855.138 - 12905.551: 98.8141% ( 11) 00:07:52.930 12905.551 - 13006.375: 98.9265% ( 20) 00:07:52.930 13006.375 - 13107.200: 98.9883% ( 11) 00:07:52.930 13107.200 - 13208.025: 99.0277% ( 7) 00:07:52.930 13208.025 - 13308.849: 99.0670% ( 7) 00:07:52.930 13308.849 - 13409.674: 99.1288% ( 11) 00:07:52.930 13409.674 - 13510.498: 99.1682% ( 7) 00:07:52.930 13510.498 - 13611.323: 99.1738% ( 1) 00:07:52.930 13611.323 - 13712.148: 99.1906% ( 3) 00:07:52.930 13712.148 - 13812.972: 99.1963% ( 1) 00:07:52.930 13812.972 - 13913.797: 99.2188% ( 4) 00:07:52.930 13913.797 - 14014.622: 99.2300% ( 2) 00:07:52.930 14014.622 - 14115.446: 99.2412% ( 2) 00:07:52.930 14115.446 - 14216.271: 99.2637% ( 4) 00:07:52.930 14216.271 - 14317.095: 99.2693% ( 1) 00:07:52.930 14317.095 - 14417.920: 99.2806% ( 2) 00:07:52.930 27020.997 - 27222.646: 99.3143% ( 6) 00:07:52.930 27222.646 - 27424.295: 99.3649% ( 9) 00:07:52.930 27424.295 - 27625.945: 99.4042% ( 7) 00:07:52.930 27625.945 - 27827.594: 99.4436% ( 7) 00:07:52.930 27827.594 - 28029.243: 99.4885% ( 8) 00:07:52.930 28029.243 - 28230.892: 99.5335% ( 8) 00:07:52.930 28230.892 - 28432.542: 99.5785% ( 8) 00:07:52.930 28432.542 - 28634.191: 99.6290% ( 
9) 00:07:52.930 28634.191 - 28835.840: 99.6403% ( 2) 00:07:52.930 31860.578 - 32062.228: 99.6796% ( 7) 00:07:52.930 32062.228 - 32263.877: 99.7246% ( 8) 00:07:52.930 32263.877 - 32465.526: 99.7696% ( 8) 00:07:52.930 32465.526 - 32667.175: 99.8089% ( 7) 00:07:52.930 32667.175 - 32868.825: 99.8595% ( 9) 00:07:52.930 32868.825 - 33070.474: 99.8988% ( 7) 00:07:52.930 33070.474 - 33272.123: 99.9494% ( 9) 00:07:52.930 33272.123 - 33473.772: 99.9944% ( 8) 00:07:52.930 33473.772 - 33675.422: 100.0000% ( 1) 00:07:52.930 00:07:52.930 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:52.930 ============================================================================== 00:07:52.930 Range in us Cumulative IO count 00:07:52.930 5898.240 - 5923.446: 0.0112% ( 2) 00:07:52.930 5923.446 - 5948.652: 0.0337% ( 4) 00:07:52.930 5948.652 - 5973.858: 0.0562% ( 4) 00:07:52.930 5973.858 - 5999.065: 0.0787% ( 4) 00:07:52.930 5999.065 - 6024.271: 0.0899% ( 2) 00:07:52.930 6024.271 - 6049.477: 0.1180% ( 5) 00:07:52.930 6049.477 - 6074.683: 0.1574% ( 7) 00:07:52.930 6074.683 - 6099.889: 0.2080% ( 9) 00:07:52.930 6099.889 - 6125.095: 0.2585% ( 9) 00:07:52.930 6125.095 - 6150.302: 0.3091% ( 9) 00:07:52.930 6150.302 - 6175.508: 0.3597% ( 9) 00:07:52.930 6175.508 - 6200.714: 0.4440% ( 15) 00:07:52.930 6200.714 - 6225.920: 0.6126% ( 30) 00:07:52.930 6225.920 - 6251.126: 0.8937% ( 50) 00:07:52.930 6251.126 - 6276.332: 1.3883% ( 88) 00:07:52.930 6276.332 - 6301.538: 2.0683% ( 121) 00:07:52.930 6301.538 - 6326.745: 3.0013% ( 166) 00:07:52.930 6326.745 - 6351.951: 4.3784% ( 245) 00:07:52.930 6351.951 - 6377.157: 5.8734% ( 266) 00:07:52.930 6377.157 - 6402.363: 7.6270% ( 312) 00:07:52.930 6402.363 - 6427.569: 9.8078% ( 388) 00:07:52.930 6427.569 - 6452.775: 12.0279% ( 395) 00:07:52.930 6452.775 - 6503.188: 16.7828% ( 846) 00:07:52.930 6503.188 - 6553.600: 21.8469% ( 901) 00:07:52.930 6553.600 - 6604.012: 26.7705% ( 876) 00:07:52.930 6604.012 - 6654.425: 31.9975% ( 930) 00:07:52.930 6654.425 - 6704.837: 37.1009% ( 908) 00:07:52.930 6704.837 - 6755.249: 42.3955% ( 942) 00:07:52.930 6755.249 - 6805.662: 47.4933% ( 907) 00:07:52.930 6805.662 - 6856.074: 52.7203% ( 930) 00:07:52.930 6856.074 - 6906.486: 57.9699% ( 934) 00:07:52.930 6906.486 - 6956.898: 63.2475% ( 939) 00:07:52.930 6956.898 - 7007.311: 68.5308% ( 940) 00:07:52.930 7007.311 - 7057.723: 73.6848% ( 917) 00:07:52.930 7057.723 - 7108.135: 78.4229% ( 843) 00:07:52.930 7108.135 - 7158.548: 82.5596% ( 736) 00:07:52.930 7158.548 - 7208.960: 85.7183% ( 562) 00:07:52.930 7208.960 - 7259.372: 87.8878% ( 386) 00:07:52.930 7259.372 - 7309.785: 89.3267% ( 256) 00:07:52.930 7309.785 - 7360.197: 90.2091% ( 157) 00:07:52.930 7360.197 - 7410.609: 90.8161% ( 108) 00:07:52.930 7410.609 - 7461.022: 91.2489% ( 77) 00:07:52.930 7461.022 - 7511.434: 91.5861% ( 60) 00:07:52.930 7511.434 - 7561.846: 91.8784% ( 52) 00:07:52.931 7561.846 - 7612.258: 92.1369% ( 46) 00:07:52.931 7612.258 - 7662.671: 92.3224% ( 33) 00:07:52.931 7662.671 - 7713.083: 92.4460% ( 22) 00:07:52.931 7713.083 - 7763.495: 92.5753% ( 23) 00:07:52.931 7763.495 - 7813.908: 92.7102% ( 24) 00:07:52.931 7813.908 - 7864.320: 92.8001% ( 16) 00:07:52.931 7864.320 - 7914.732: 92.8788% ( 14) 00:07:52.931 7914.732 - 7965.145: 92.9406% ( 11) 00:07:52.931 7965.145 - 8015.557: 92.9800% ( 7) 00:07:52.931 8015.557 - 8065.969: 93.0250% ( 8) 00:07:52.931 8065.969 - 8116.382: 93.0643% ( 7) 00:07:52.931 8116.382 - 8166.794: 93.1036% ( 7) 00:07:52.931 8166.794 - 8217.206: 93.1486% ( 8) 00:07:52.931 8217.206 - 8267.618: 93.1879% 
( 7) 00:07:52.931 8267.618 - 8318.031: 93.2161% ( 5) 00:07:52.931 8318.031 - 8368.443: 93.2498% ( 6) 00:07:52.931 8368.443 - 8418.855: 93.2779% ( 5) 00:07:52.931 8418.855 - 8469.268: 93.3060% ( 5) 00:07:52.931 8469.268 - 8519.680: 93.3341% ( 5) 00:07:52.931 8519.680 - 8570.092: 93.3622% ( 5) 00:07:52.931 8570.092 - 8620.505: 93.3903% ( 5) 00:07:52.931 8620.505 - 8670.917: 93.4071% ( 3) 00:07:52.931 8670.917 - 8721.329: 93.4184% ( 2) 00:07:52.931 8721.329 - 8771.742: 93.4465% ( 5) 00:07:52.931 8771.742 - 8822.154: 93.4690% ( 4) 00:07:52.931 8822.154 - 8872.566: 93.5083% ( 7) 00:07:52.931 8872.566 - 8922.978: 93.5533% ( 8) 00:07:52.931 8922.978 - 8973.391: 93.6432% ( 16) 00:07:52.931 8973.391 - 9023.803: 93.7219% ( 14) 00:07:52.931 9023.803 - 9074.215: 93.8006% ( 14) 00:07:52.931 9074.215 - 9124.628: 93.9130% ( 20) 00:07:52.931 9124.628 - 9175.040: 93.9917% ( 14) 00:07:52.931 9175.040 - 9225.452: 94.0535% ( 11) 00:07:52.931 9225.452 - 9275.865: 94.1097% ( 10) 00:07:52.931 9275.865 - 9326.277: 94.2277% ( 21) 00:07:52.931 9326.277 - 9376.689: 94.3402% ( 20) 00:07:52.931 9376.689 - 9427.102: 94.4469% ( 19) 00:07:52.931 9427.102 - 9477.514: 94.5369% ( 16) 00:07:52.931 9477.514 - 9527.926: 94.6324% ( 17) 00:07:52.931 9527.926 - 9578.338: 94.7448% ( 20) 00:07:52.931 9578.338 - 9628.751: 94.8741% ( 23) 00:07:52.931 9628.751 - 9679.163: 95.0146% ( 25) 00:07:52.931 9679.163 - 9729.575: 95.1495% ( 24) 00:07:52.931 9729.575 - 9779.988: 95.2619% ( 20) 00:07:52.931 9779.988 - 9830.400: 95.3743% ( 20) 00:07:52.931 9830.400 - 9880.812: 95.4811% ( 19) 00:07:52.931 9880.812 - 9931.225: 95.5767% ( 17) 00:07:52.931 9931.225 - 9981.637: 95.6722% ( 17) 00:07:52.931 9981.637 - 10032.049: 95.7678% ( 17) 00:07:52.931 10032.049 - 10082.462: 95.8802% ( 20) 00:07:52.931 10082.462 - 10132.874: 95.9982% ( 21) 00:07:52.931 10132.874 - 10183.286: 96.0881% ( 16) 00:07:52.931 10183.286 - 10233.698: 96.1724% ( 15) 00:07:52.931 10233.698 - 10284.111: 96.2567% ( 15) 00:07:52.931 10284.111 - 10334.523: 96.3411% ( 15) 00:07:52.931 10334.523 - 10384.935: 96.4197% ( 14) 00:07:52.931 10384.935 - 10435.348: 96.4872% ( 12) 00:07:52.931 10435.348 - 10485.760: 96.5321% ( 8) 00:07:52.931 10485.760 - 10536.172: 96.5603% ( 5) 00:07:52.931 10536.172 - 10586.585: 96.5940% ( 6) 00:07:52.931 10586.585 - 10636.997: 96.6446% ( 9) 00:07:52.931 10636.997 - 10687.409: 96.6951% ( 9) 00:07:52.931 10687.409 - 10737.822: 96.7345% ( 7) 00:07:52.931 10737.822 - 10788.234: 96.7795% ( 8) 00:07:52.931 10788.234 - 10838.646: 96.8188% ( 7) 00:07:52.931 10838.646 - 10889.058: 96.8638% ( 8) 00:07:52.931 10889.058 - 10939.471: 96.9087% ( 8) 00:07:52.931 10939.471 - 10989.883: 96.9424% ( 6) 00:07:52.931 10989.883 - 11040.295: 96.9649% ( 4) 00:07:52.931 11040.295 - 11090.708: 96.9874% ( 4) 00:07:52.931 11090.708 - 11141.120: 97.0099% ( 4) 00:07:52.931 11141.120 - 11191.532: 97.0324% ( 4) 00:07:52.931 11191.532 - 11241.945: 97.0549% ( 4) 00:07:52.931 11241.945 - 11292.357: 97.0717% ( 3) 00:07:52.931 11292.357 - 11342.769: 97.0942% ( 4) 00:07:52.931 11342.769 - 11393.182: 97.1167% ( 4) 00:07:52.931 11393.182 - 11443.594: 97.1392% ( 4) 00:07:52.931 11443.594 - 11494.006: 97.1616% ( 4) 00:07:52.931 11494.006 - 11544.418: 97.1841% ( 4) 00:07:52.931 11544.418 - 11594.831: 97.2066% ( 4) 00:07:52.931 11594.831 - 11645.243: 97.2347% ( 5) 00:07:52.931 11645.243 - 11695.655: 97.2572% ( 4) 00:07:52.931 11695.655 - 11746.068: 97.2797% ( 4) 00:07:52.931 11746.068 - 11796.480: 97.3190% ( 7) 00:07:52.931 11796.480 - 11846.892: 97.3640% ( 8) 00:07:52.931 11846.892 - 11897.305: 
97.4033% ( 7) 00:07:52.931 11897.305 - 11947.717: 97.4483% ( 8) 00:07:52.931 11947.717 - 11998.129: 97.5045% ( 10) 00:07:52.931 11998.129 - 12048.542: 97.5382% ( 6) 00:07:52.931 12048.542 - 12098.954: 97.5663% ( 5) 00:07:52.931 12098.954 - 12149.366: 97.5944% ( 5) 00:07:52.931 12149.366 - 12199.778: 97.6450% ( 9) 00:07:52.931 12199.778 - 12250.191: 97.6956% ( 9) 00:07:52.931 12250.191 - 12300.603: 97.7855% ( 16) 00:07:52.931 12300.603 - 12351.015: 97.8698% ( 15) 00:07:52.931 12351.015 - 12401.428: 97.9260% ( 10) 00:07:52.931 12401.428 - 12451.840: 97.9879% ( 11) 00:07:52.931 12451.840 - 12502.252: 98.0384% ( 9) 00:07:52.931 12502.252 - 12552.665: 98.1115% ( 13) 00:07:52.931 12552.665 - 12603.077: 98.1733% ( 11) 00:07:52.931 12603.077 - 12653.489: 98.2520% ( 14) 00:07:52.931 12653.489 - 12703.902: 98.3420% ( 16) 00:07:52.931 12703.902 - 12754.314: 98.4094% ( 12) 00:07:52.931 12754.314 - 12804.726: 98.4712% ( 11) 00:07:52.931 12804.726 - 12855.138: 98.5218% ( 9) 00:07:52.931 12855.138 - 12905.551: 98.5780% ( 10) 00:07:52.931 12905.551 - 13006.375: 98.7073% ( 23) 00:07:52.931 13006.375 - 13107.200: 98.8309% ( 22) 00:07:52.931 13107.200 - 13208.025: 98.9265% ( 17) 00:07:52.931 13208.025 - 13308.849: 99.0389% ( 20) 00:07:52.931 13308.849 - 13409.674: 99.1176% ( 14) 00:07:52.931 13409.674 - 13510.498: 99.1682% ( 9) 00:07:52.931 13510.498 - 13611.323: 99.1906% ( 4) 00:07:52.931 13611.323 - 13712.148: 99.2075% ( 3) 00:07:52.931 13712.148 - 13812.972: 99.2244% ( 3) 00:07:52.931 13812.972 - 13913.797: 99.2356% ( 2) 00:07:52.931 13913.797 - 14014.622: 99.2525% ( 3) 00:07:52.931 14014.622 - 14115.446: 99.2693% ( 3) 00:07:52.931 14115.446 - 14216.271: 99.2806% ( 2) 00:07:52.931 25407.803 - 25508.628: 99.2918% ( 2) 00:07:52.931 25508.628 - 25609.452: 99.3143% ( 4) 00:07:52.931 25609.452 - 25710.277: 99.3368% ( 4) 00:07:52.931 25710.277 - 25811.102: 99.3593% ( 4) 00:07:52.931 25811.102 - 26012.751: 99.4098% ( 9) 00:07:52.931 26012.751 - 26214.400: 99.4548% ( 8) 00:07:52.931 26214.400 - 26416.049: 99.5054% ( 9) 00:07:52.931 26416.049 - 26617.698: 99.5504% ( 8) 00:07:52.931 26617.698 - 26819.348: 99.6009% ( 9) 00:07:52.931 26819.348 - 27020.997: 99.6403% ( 7) 00:07:52.931 30045.735 - 30247.385: 99.6459% ( 1) 00:07:52.931 30247.385 - 30449.034: 99.6965% ( 9) 00:07:52.931 30449.034 - 30650.683: 99.7415% ( 8) 00:07:52.931 30650.683 - 30852.332: 99.7920% ( 9) 00:07:52.931 30852.332 - 31053.982: 99.8370% ( 8) 00:07:52.931 31053.982 - 31255.631: 99.8820% ( 8) 00:07:52.931 31255.631 - 31457.280: 99.9326% ( 9) 00:07:52.931 31457.280 - 31658.929: 99.9831% ( 9) 00:07:52.931 31658.929 - 31860.578: 100.0000% ( 3) 00:07:52.931 00:07:52.931 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:07:52.931 ============================================================================== 00:07:52.931 Range in us Cumulative IO count 00:07:52.931 5923.446 - 5948.652: 0.0112% ( 2) 00:07:52.931 5948.652 - 5973.858: 0.0225% ( 2) 00:07:52.931 5973.858 - 5999.065: 0.0618% ( 7) 00:07:52.931 5999.065 - 6024.271: 0.0843% ( 4) 00:07:52.931 6024.271 - 6049.477: 0.1293% ( 8) 00:07:52.931 6049.477 - 6074.683: 0.1686% ( 7) 00:07:52.931 6074.683 - 6099.889: 0.2248% ( 10) 00:07:52.931 6099.889 - 6125.095: 0.2698% ( 8) 00:07:52.931 6125.095 - 6150.302: 0.3372% ( 12) 00:07:52.931 6150.302 - 6175.508: 0.4272% ( 16) 00:07:52.931 6175.508 - 6200.714: 0.5283% ( 18) 00:07:52.931 6200.714 - 6225.920: 0.6688% ( 25) 00:07:52.931 6225.920 - 6251.126: 0.9049% ( 42) 00:07:52.931 6251.126 - 6276.332: 1.3602% ( 81) 00:07:52.931 6276.332 - 
6301.538: 2.0515% ( 123) 00:07:52.931 6301.538 - 6326.745: 3.0857% ( 184) 00:07:52.931 6326.745 - 6351.951: 4.4290% ( 239) 00:07:52.931 6351.951 - 6377.157: 5.9746% ( 275) 00:07:52.931 6377.157 - 6402.363: 7.8743% ( 338) 00:07:52.931 6402.363 - 6427.569: 9.8921% ( 359) 00:07:52.931 6427.569 - 6452.775: 12.1909% ( 409) 00:07:52.931 6452.775 - 6503.188: 16.8952% ( 837) 00:07:52.931 6503.188 - 6553.600: 21.9087% ( 892) 00:07:52.931 6553.600 - 6604.012: 26.8323% ( 876) 00:07:52.931 6604.012 - 6654.425: 32.0088% ( 921) 00:07:52.931 6654.425 - 6704.837: 37.3426% ( 949) 00:07:52.931 6704.837 - 6755.249: 42.5585% ( 928) 00:07:52.931 6755.249 - 6805.662: 47.8586% ( 943) 00:07:52.931 6805.662 - 6856.074: 53.1587% ( 943) 00:07:52.931 6856.074 - 6906.486: 58.3858% ( 930) 00:07:52.931 6906.486 - 6956.898: 63.6578% ( 938) 00:07:52.931 6956.898 - 7007.311: 68.9299% ( 938) 00:07:52.931 7007.311 - 7057.723: 74.0614% ( 913) 00:07:52.931 7057.723 - 7108.135: 78.8163% ( 846) 00:07:52.931 7108.135 - 7158.548: 83.0036% ( 745) 00:07:52.931 7158.548 - 7208.960: 86.2129% ( 571) 00:07:52.931 7208.960 - 7259.372: 88.3543% ( 381) 00:07:52.931 7259.372 - 7309.785: 89.6639% ( 233) 00:07:52.931 7309.785 - 7360.197: 90.5576% ( 159) 00:07:52.931 7360.197 - 7410.609: 91.1084% ( 98) 00:07:52.931 7410.609 - 7461.022: 91.4962% ( 69) 00:07:52.931 7461.022 - 7511.434: 91.8165% ( 57) 00:07:52.931 7511.434 - 7561.846: 92.0976% ( 50) 00:07:52.931 7561.846 - 7612.258: 92.3055% ( 37) 00:07:52.931 7612.258 - 7662.671: 92.4685% ( 29) 00:07:52.931 7662.671 - 7713.083: 92.5922% ( 22) 00:07:52.931 7713.083 - 7763.495: 92.6877% ( 17) 00:07:52.932 7763.495 - 7813.908: 92.7664% ( 14) 00:07:52.932 7813.908 - 7864.320: 92.8451% ( 14) 00:07:52.932 7864.320 - 7914.732: 92.9069% ( 11) 00:07:52.932 7914.732 - 7965.145: 92.9631% ( 10) 00:07:52.932 7965.145 - 8015.557: 93.0025% ( 7) 00:07:52.932 8015.557 - 8065.969: 93.0362% ( 6) 00:07:52.932 8065.969 - 8116.382: 93.0812% ( 8) 00:07:52.932 8116.382 - 8166.794: 93.1261% ( 8) 00:07:52.932 8166.794 - 8217.206: 93.1655% ( 7) 00:07:52.932 8217.206 - 8267.618: 93.1936% ( 5) 00:07:52.932 8267.618 - 8318.031: 93.2217% ( 5) 00:07:52.932 8318.031 - 8368.443: 93.2498% ( 5) 00:07:52.932 8368.443 - 8418.855: 93.2779% ( 5) 00:07:52.932 8418.855 - 8469.268: 93.3060% ( 5) 00:07:52.932 8469.268 - 8519.680: 93.3285% ( 4) 00:07:52.932 8519.680 - 8570.092: 93.3397% ( 2) 00:07:52.932 8570.092 - 8620.505: 93.3509% ( 2) 00:07:52.932 8620.505 - 8670.917: 93.3790% ( 5) 00:07:52.932 8670.917 - 8721.329: 93.4240% ( 8) 00:07:52.932 8721.329 - 8771.742: 93.4465% ( 4) 00:07:52.932 8771.742 - 8822.154: 93.4858% ( 7) 00:07:52.932 8822.154 - 8872.566: 93.5364% ( 9) 00:07:52.932 8872.566 - 8922.978: 93.5870% ( 9) 00:07:52.932 8922.978 - 8973.391: 93.6488% ( 11) 00:07:52.932 8973.391 - 9023.803: 93.6938% ( 8) 00:07:52.932 9023.803 - 9074.215: 93.7500% ( 10) 00:07:52.932 9074.215 - 9124.628: 93.8174% ( 12) 00:07:52.932 9124.628 - 9175.040: 93.9242% ( 19) 00:07:52.932 9175.040 - 9225.452: 93.9973% ( 13) 00:07:52.932 9225.452 - 9275.865: 94.0872% ( 16) 00:07:52.932 9275.865 - 9326.277: 94.1940% ( 19) 00:07:52.932 9326.277 - 9376.689: 94.2952% ( 18) 00:07:52.932 9376.689 - 9427.102: 94.3964% ( 18) 00:07:52.932 9427.102 - 9477.514: 94.4807% ( 15) 00:07:52.932 9477.514 - 9527.926: 94.5818% ( 18) 00:07:52.932 9527.926 - 9578.338: 94.6830% ( 18) 00:07:52.932 9578.338 - 9628.751: 94.7898% ( 19) 00:07:52.932 9628.751 - 9679.163: 94.8966% ( 19) 00:07:52.932 9679.163 - 9729.575: 94.9921% ( 17) 00:07:52.932 9729.575 - 9779.988: 95.0764% ( 15) 
00:07:52.932 9779.988 - 9830.400: 95.1607% ( 15) 00:07:52.932 9830.400 - 9880.812: 95.2451% ( 15) 00:07:52.932 9880.812 - 9931.225: 95.3631% ( 21) 00:07:52.932 9931.225 - 9981.637: 95.4586% ( 17) 00:07:52.932 9981.637 - 10032.049: 95.5598% ( 18) 00:07:52.932 10032.049 - 10082.462: 95.6778% ( 21) 00:07:52.932 10082.462 - 10132.874: 95.7734% ( 17) 00:07:52.932 10132.874 - 10183.286: 95.8970% ( 22) 00:07:52.932 10183.286 - 10233.698: 95.9926% ( 17) 00:07:52.932 10233.698 - 10284.111: 96.0825% ( 16) 00:07:52.932 10284.111 - 10334.523: 96.1837% ( 18) 00:07:52.932 10334.523 - 10384.935: 96.2567% ( 13) 00:07:52.932 10384.935 - 10435.348: 96.3298% ( 13) 00:07:52.932 10435.348 - 10485.760: 96.3973% ( 12) 00:07:52.932 10485.760 - 10536.172: 96.4591% ( 11) 00:07:52.932 10536.172 - 10586.585: 96.5209% ( 11) 00:07:52.932 10586.585 - 10636.997: 96.5715% ( 9) 00:07:52.932 10636.997 - 10687.409: 96.6277% ( 10) 00:07:52.932 10687.409 - 10737.822: 96.6783% ( 9) 00:07:52.932 10737.822 - 10788.234: 96.7289% ( 9) 00:07:52.932 10788.234 - 10838.646: 96.7907% ( 11) 00:07:52.932 10838.646 - 10889.058: 96.8357% ( 8) 00:07:52.932 10889.058 - 10939.471: 96.8750% ( 7) 00:07:52.932 10939.471 - 10989.883: 96.8919% ( 3) 00:07:52.932 10989.883 - 11040.295: 96.9200% ( 5) 00:07:52.932 11040.295 - 11090.708: 96.9537% ( 6) 00:07:52.932 11090.708 - 11141.120: 96.9818% ( 5) 00:07:52.932 11141.120 - 11191.532: 97.0155% ( 6) 00:07:52.932 11191.532 - 11241.945: 97.0492% ( 6) 00:07:52.932 11241.945 - 11292.357: 97.0830% ( 6) 00:07:52.932 11292.357 - 11342.769: 97.1167% ( 6) 00:07:52.932 11342.769 - 11393.182: 97.1504% ( 6) 00:07:52.932 11393.182 - 11443.594: 97.1897% ( 7) 00:07:52.932 11443.594 - 11494.006: 97.2235% ( 6) 00:07:52.932 11494.006 - 11544.418: 97.2628% ( 7) 00:07:52.932 11544.418 - 11594.831: 97.2965% ( 6) 00:07:52.932 11594.831 - 11645.243: 97.3359% ( 7) 00:07:52.932 11645.243 - 11695.655: 97.3640% ( 5) 00:07:52.932 11695.655 - 11746.068: 97.3865% ( 4) 00:07:52.932 11746.068 - 11796.480: 97.3977% ( 2) 00:07:52.932 11796.480 - 11846.892: 97.4089% ( 2) 00:07:52.932 11846.892 - 11897.305: 97.4539% ( 8) 00:07:52.932 11897.305 - 11947.717: 97.4876% ( 6) 00:07:52.932 11947.717 - 11998.129: 97.5214% ( 6) 00:07:52.932 11998.129 - 12048.542: 97.5663% ( 8) 00:07:52.932 12048.542 - 12098.954: 97.6281% ( 11) 00:07:52.932 12098.954 - 12149.366: 97.6619% ( 6) 00:07:52.932 12149.366 - 12199.778: 97.7125% ( 9) 00:07:52.932 12199.778 - 12250.191: 97.7630% ( 9) 00:07:52.932 12250.191 - 12300.603: 97.8024% ( 7) 00:07:52.932 12300.603 - 12351.015: 97.8530% ( 9) 00:07:52.932 12351.015 - 12401.428: 97.8979% ( 8) 00:07:52.932 12401.428 - 12451.840: 97.9485% ( 9) 00:07:52.932 12451.840 - 12502.252: 97.9935% ( 8) 00:07:52.932 12502.252 - 12552.665: 98.0441% ( 9) 00:07:52.932 12552.665 - 12603.077: 98.0946% ( 9) 00:07:52.932 12603.077 - 12653.489: 98.1902% ( 17) 00:07:52.932 12653.489 - 12703.902: 98.2801% ( 16) 00:07:52.932 12703.902 - 12754.314: 98.3420% ( 11) 00:07:52.932 12754.314 - 12804.726: 98.4094% ( 12) 00:07:52.932 12804.726 - 12855.138: 98.4768% ( 12) 00:07:52.932 12855.138 - 12905.551: 98.5443% ( 12) 00:07:52.932 12905.551 - 13006.375: 98.6792% ( 24) 00:07:52.932 13006.375 - 13107.200: 98.8028% ( 22) 00:07:52.932 13107.200 - 13208.025: 98.8928% ( 16) 00:07:52.932 13208.025 - 13308.849: 98.9883% ( 17) 00:07:52.932 13308.849 - 13409.674: 99.0895% ( 18) 00:07:52.932 13409.674 - 13510.498: 99.1625% ( 13) 00:07:52.932 13510.498 - 13611.323: 99.1906% ( 5) 00:07:52.932 13611.323 - 13712.148: 99.2075% ( 3) 00:07:52.932 13712.148 - 
13812.972: 99.2188% ( 2)
00:07:52.932  13812.972 -  13913.797:   99.2356% (         3)
00:07:52.932  13913.797 -  14014.622:   99.2525% (         3)
00:07:52.932  14014.622 -  14115.446:   99.2693% (         3)
00:07:52.932  14115.446 -  14216.271:   99.2806% (         2)
00:07:52.932 [... tail buckets from 24197.908 us omitted; cumulative I/O reaches 99.6403% at 25811.102 us and 100.0000% in the 30247.385 - 30449.034 bucket ( 6) ...]
00:07:52.932
00:07:52.932 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0:
00:07:52.932 ==============================================================================
00:07:52.932        Range in us     Cumulative    IO count
00:07:52.932   5923.446 -  5948.652:    0.0337% (         6)
00:07:52.932 [... detailed buckets omitted; cumulative I/O crosses 50% in the 6805.662 - 6856.074 bucket (52.8946%), reaches 90.0967% at 7360.197 us and 99.2806% at 14216.271 us ...]
00:07:52.933  28432.542 -  28634.191:  100.0000% (         8)
00:07:52.933
00:07:52.933 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0:
00:07:52.933 ==============================================================================
00:07:52.933        Range in us     Cumulative    IO count
00:07:52.933   5948.652 -  5973.858:    0.0225% (         4)
00:07:52.934 [... detailed buckets omitted; cumulative I/O crosses 50% in the 6805.662 - 6856.074 bucket (52.7203%), reaches 90.2203% at 7410.609 us and 99.2806% at 14317.095 us ...]
00:07:52.934  26819.348 -  27020.997:  100.0000% (         3)
00:07:52.934
00:07:52.934 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0:
00:07:52.934 ==============================================================================
00:07:52.934        Range in us     Cumulative    IO count
00:07:52.934   5873.034 -  5898.240:    0.0112% (         2)
00:07:52.935 [... detailed buckets omitted; cumulative I/O crosses 50% in the 6805.662 - 6856.074 bucket (52.4026%), reaches 90.0986% at 7410.609 us and 99.2832% at 14216.271 us ...]
00:07:52.935  21677.292 -  21778.117:  100.0000% (         1)
00:07:52.935
00:07:52.935 19:09:37 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0
00:07:54.317 Initializing NVMe Controllers
00:07:54.317 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:07:54.317 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:07:54.317 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:07:54.317 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:07:54.317 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:07:54.317 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0
00:07:54.317 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0
00:07:54.317 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0
00:07:54.317 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0
00:07:54.317 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0
00:07:54.317 Initialization complete. Launching workers.
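For context when rereading this run: the spdk_nvme_perf invocation above issues 12 KiB writes at queue depth 128 against every attached namespace from a single core for one second, with -LL requesting the detailed latency histograms printed below. A minimal Python sketch of the same invocation follows; the flag glosses are best-effort readings of the tool's usage text and should be checked against spdk_nvme_perf --help rather than treated as authoritative:

    #!/usr/bin/env python3
    """Hedged sketch: rebuild the perf command line shown in the log above."""
    import subprocess

    PERF = "/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf"  # path as logged

    cmd = [
        PERF,
        "-q", "128",     # queue depth: I/Os kept outstanding per namespace
        "-w", "write",   # I/O pattern for this run
        "-o", "12288",   # I/O size in bytes (12 KiB)
        "-t", "1",       # run time in seconds
        "-LL",           # latency tracking; doubled to print per-bucket histograms (assumption)
        "-i", "0",       # shared memory group ID (assumption)
    ]
    subprocess.run(cmd, check=True)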
00:07:54.317 ========================================================
00:07:54.318                                                                      Latency(us)
00:07:54.318 Device Information                      :       IOPS      MiB/s    Average        min        max
00:07:54.318 PCIE (0000:00:10.0) NSID 1 from core  0 :   17058.27     199.90    7515.07    4966.36   31593.57
00:07:54.318 PCIE (0000:00:11.0) NSID 1 from core  0 :   17058.27     199.90    7503.77    5089.17   29667.36
00:07:54.318 PCIE (0000:00:13.0) NSID 1 from core  0 :   17058.27     199.90    7492.31    5117.48   28371.16
00:07:54.318 PCIE (0000:00:12.0) NSID 1 from core  0 :   17058.27     199.90    7480.99    5083.51   26509.32
00:07:54.318 PCIE (0000:00:12.0) NSID 2 from core  0 :   17058.27     199.90    7469.60    5075.54   24709.08
00:07:54.318 PCIE (0000:00:12.0) NSID 3 from core  0 :   17122.16     200.65    7430.61    5080.43   19450.51
00:07:54.318 ========================================================
00:07:54.318 Total                                   :  102413.49    1200.16    7482.03    4966.36   31593.57
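Two quick arithmetic checks on the summary table above, sketched below with values copied from the table: throughput in MiB/s should equal IOPS times the 12288-byte I/O size, and with 128 I/Os outstanding per namespace, Little's law predicts an average latency near 128/IOPS. Both hold to within rounding:

    # Consistency checks for the summary table (values copied from the log).
    IO_SIZE_BYTES = 12288    # from -o 12288
    QUEUE_DEPTH = 128        # from -q 128, per namespace

    iops = 17058.27          # PCIE (0000:00:10.0) NSID 1 row
    mib_per_s = iops * IO_SIZE_BYTES / (1024 * 1024)
    print(f"{mib_per_s:.2f} MiB/s")        # -> 199.90, matching the table

    # Little's law: mean latency ~= outstanding I/Os / completion rate.
    avg_latency_us = QUEUE_DEPTH / iops * 1_000_000
    print(f"{avg_latency_us:.2f} us")      # -> ~7503.7 us vs 7515.07 us reported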
00:07:54.318 Summary latency data per namespace from core 0, consolidated (all values us):
00:07:54.318 =================================================================================
00:07:54.318 Columns: PCIE 10.0/NS1, 11.0/NS1, 13.0/NS1, 12.0/NS1, 12.0/NS2, 12.0/NS3
00:07:54.318   1.00000% :  5394.117   5469.735   5469.735   5444.529   5444.529   5469.735
00:07:54.318  10.00000% :  6200.714   6251.126   6251.126   6251.126   6251.126   6251.126
00:07:54.318  25.00000% :  6553.600   6604.012   6553.600   6604.012   6604.012   6604.012
00:07:54.318  50.00000% :  6906.486   6856.074   6856.074   6856.074   6856.074   6906.486
00:07:54.318  75.00000% :  7662.671   7662.671   7662.671   7713.083   7612.258   7612.258
00:07:54.318  90.00000% :  9275.865   9225.452   8973.391   8922.978   9124.628   9326.277
00:07:54.318  95.00000% : 11393.182  11544.418  11494.006  11544.418  11594.831  11594.831
00:07:54.318  98.00000% : 13812.972  13812.972  14014.622  13611.323  13712.148  13712.148
00:07:54.318  99.00000% : 14922.043  15627.815  15224.517  15224.517  15224.517  14518.745
00:07:54.318  99.50000% : 26214.400  24399.557  23189.662  21475.643  19559.975  15123.692
00:07:54.318  99.90000% : 31255.631  29440.788  28029.243  26214.400  24399.557  19055.852
00:07:54.318  99.99000% : 31658.929  29844.086  28432.542  26617.698  24702.031  19459.151
00:07:54.318  99.99900% and above: each namespace pins at its maximum (31658.929, 29844.086, 28432.542, 26617.698, 24802.855, 19459.151 us respectively)
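The per-device histograms below are cumulative: each row reports the share of all I/Os completed at or below the bucket's upper bound. The summary percentiles above appear to be read straight off those rows, taking the upper bound of the first bucket whose cumulative share reaches the target, as in this sketch (bucket values copied from the 0000:00:10.0 NSID 1 histogram below):

    def percentile_from_buckets(buckets: list[tuple[float, float]], target_pct: float) -> float:
        """Upper bound (us) of the first bucket whose cumulative share hits target_pct."""
        for upper_us, cumulative_pct in buckets:   # assumes ascending bucket order
            if cumulative_pct >= target_pct:
                return upper_us
        raise ValueError("target percentile beyond recorded range")

    # Two adjacent rows from the PCIE (0000:00:10.0) NSID 1 histogram:
    rows = [(6856.074, 49.8537), (6906.486, 52.4286)]
    assert percentile_from_buckets(rows, 50.0) == 6906.486   # matches 50.00000% : 6906.486 above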
00:07:54.318 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0:
00:07:54.318 ==============================================================================
00:07:54.318        Range in us     Cumulative    IO count
00:07:54.318   4965.612 -  4990.818:    0.0117% (         2)
00:07:54.319 [... detailed buckets omitted; cumulative I/O crosses 50% in the 6856.074 - 6906.486 bucket (52.4286%) and reaches 90.0749% at 9275.865 us ...]
00:07:54.319  31457.280 -  31658.929:  100.0000% (         5)
00:07:54.319
00:07:54.319 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0:
00:07:54.319 ==============================================================================
00:07:54.319        Range in us     Cumulative    IO count
00:07:54.320   5066.437 -  5091.643:    0.0059% (         1)
00:07:54.320 [... detailed buckets omitted; cumulative I/O crosses 50% in the 6805.662 - 6856.074 bucket (50.7959%); the 90th percentile lands at 9225.452 us per the summary above ...]
00:07:54.321  29642.437 -  29844.086:  100.0000% (         2)
00:07:54.321
00:07:54.321 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0:
00:07:54.321 ==============================================================================
00:07:54.321        Range in us     Cumulative    IO count
00:07:54.321   5116.849 -  5142.055:    0.0117% (         2)
00:07:54.321 [... detailed buckets omitted; per the summary above, the median is 6856.074 us and the 90th percentile 8973.391 us ...]
00:07:54.322  14115.446 -  14216.271:   98.3029% (         6)
00:07:54.322 14216.271 - 14317.095: 98.3263% ( 4) 00:07:54.322 14317.095 - 14417.920: 98.3614% ( 6) 00:07:54.322 14417.920 - 14518.745: 98.4199% ( 10) 00:07:54.322 14518.745 - 14619.569: 98.4726% ( 9) 00:07:54.322 14619.569 - 14720.394: 98.5487% ( 13) 00:07:54.322 14720.394 - 14821.218: 98.8003% ( 43) 00:07:54.322 14821.218 - 14922.043: 98.8471% ( 8) 00:07:54.322 14922.043 - 15022.868: 98.8823% ( 6) 00:07:54.322 15022.868 - 15123.692: 98.9525% ( 12) 00:07:54.322 15123.692 - 15224.517: 99.0051% ( 9) 00:07:54.322 15224.517 - 15325.342: 99.0637% ( 10) 00:07:54.322 15325.342 - 15426.166: 99.0988% ( 6) 00:07:54.322 15426.166 - 15526.991: 99.1397% ( 7) 00:07:54.322 15526.991 - 15627.815: 99.1690% ( 5) 00:07:54.322 15627.815 - 15728.640: 99.2041% ( 6) 00:07:54.322 15728.640 - 15829.465: 99.2451% ( 7) 00:07:54.322 15829.465 - 15930.289: 99.2509% ( 1) 00:07:54.322 22080.591 - 22181.415: 99.2743% ( 4) 00:07:54.322 22181.415 - 22282.240: 99.2978% ( 4) 00:07:54.322 22282.240 - 22383.065: 99.3212% ( 4) 00:07:54.322 22383.065 - 22483.889: 99.3446% ( 4) 00:07:54.322 22483.889 - 22584.714: 99.3738% ( 5) 00:07:54.322 22584.714 - 22685.538: 99.3972% ( 4) 00:07:54.322 22685.538 - 22786.363: 99.4206% ( 4) 00:07:54.322 22786.363 - 22887.188: 99.4382% ( 3) 00:07:54.322 22887.188 - 22988.012: 99.4616% ( 4) 00:07:54.322 22988.012 - 23088.837: 99.4909% ( 5) 00:07:54.322 23088.837 - 23189.662: 99.5084% ( 3) 00:07:54.322 23189.662 - 23290.486: 99.5318% ( 4) 00:07:54.322 23290.486 - 23391.311: 99.5611% ( 5) 00:07:54.322 23391.311 - 23492.135: 99.5845% ( 4) 00:07:54.322 23492.135 - 23592.960: 99.6079% ( 4) 00:07:54.322 23592.960 - 23693.785: 99.6255% ( 3) 00:07:54.322 26617.698 - 26819.348: 99.6372% ( 2) 00:07:54.322 26819.348 - 27020.997: 99.6840% ( 8) 00:07:54.322 27020.997 - 27222.646: 99.7308% ( 8) 00:07:54.322 27222.646 - 27424.295: 99.7776% ( 8) 00:07:54.322 27424.295 - 27625.945: 99.8186% ( 7) 00:07:54.322 27625.945 - 27827.594: 99.8654% ( 8) 00:07:54.322 27827.594 - 28029.243: 99.9181% ( 9) 00:07:54.322 28029.243 - 28230.892: 99.9649% ( 8) 00:07:54.322 28230.892 - 28432.542: 100.0000% ( 6) 00:07:54.322 00:07:54.322 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:07:54.322 ============================================================================== 00:07:54.322 Range in us Cumulative IO count 00:07:54.322 5066.437 - 5091.643: 0.0059% ( 1) 00:07:54.322 5091.643 - 5116.849: 0.0117% ( 1) 00:07:54.322 5167.262 - 5192.468: 0.0234% ( 2) 00:07:54.322 5192.468 - 5217.674: 0.0410% ( 3) 00:07:54.322 5217.674 - 5242.880: 0.0644% ( 4) 00:07:54.322 5242.880 - 5268.086: 0.1346% ( 12) 00:07:54.322 5268.086 - 5293.292: 0.1931% ( 10) 00:07:54.322 5293.292 - 5318.498: 0.2809% ( 15) 00:07:54.322 5318.498 - 5343.705: 0.4623% ( 31) 00:07:54.322 5343.705 - 5368.911: 0.5852% ( 21) 00:07:54.322 5368.911 - 5394.117: 0.7608% ( 30) 00:07:54.322 5394.117 - 5419.323: 0.9831% ( 38) 00:07:54.322 5419.323 - 5444.529: 1.1236% ( 24) 00:07:54.322 5444.529 - 5469.735: 1.3284% ( 35) 00:07:54.322 5469.735 - 5494.942: 1.4981% ( 29) 00:07:54.322 5494.942 - 5520.148: 1.6912% ( 33) 00:07:54.322 5520.148 - 5545.354: 2.0248% ( 57) 00:07:54.322 5545.354 - 5570.560: 2.2530% ( 39) 00:07:54.322 5570.560 - 5595.766: 2.5515% ( 51) 00:07:54.322 5595.766 - 5620.972: 2.9846% ( 74) 00:07:54.322 5620.972 - 5646.178: 3.3649% ( 65) 00:07:54.322 5646.178 - 5671.385: 4.0028% ( 109) 00:07:54.322 5671.385 - 5696.591: 4.3832% ( 65) 00:07:54.322 5696.591 - 5721.797: 4.6173% ( 40) 00:07:54.322 5721.797 - 5747.003: 4.7753% ( 27) 00:07:54.322 5747.003 - 
5772.209: 5.0503% ( 47) 00:07:54.322 5772.209 - 5797.415: 5.1615% ( 19) 00:07:54.322 5797.415 - 5822.622: 5.2669% ( 18) 00:07:54.322 5822.622 - 5847.828: 5.3546% ( 15) 00:07:54.322 5847.828 - 5873.034: 5.4307% ( 13) 00:07:54.322 5873.034 - 5898.240: 5.4834% ( 9) 00:07:54.322 5898.240 - 5923.446: 5.5887% ( 18) 00:07:54.322 5923.446 - 5948.652: 5.6589% ( 12) 00:07:54.322 5948.652 - 5973.858: 5.7643% ( 18) 00:07:54.322 5973.858 - 5999.065: 5.8462% ( 14) 00:07:54.322 5999.065 - 6024.271: 5.9632% ( 20) 00:07:54.322 6024.271 - 6049.477: 6.1447% ( 31) 00:07:54.322 6049.477 - 6074.683: 6.5719% ( 73) 00:07:54.322 6074.683 - 6099.889: 6.9230% ( 60) 00:07:54.322 6099.889 - 6125.095: 7.2273% ( 52) 00:07:54.322 6125.095 - 6150.302: 7.8301% ( 103) 00:07:54.322 6150.302 - 6175.508: 8.2046% ( 64) 00:07:54.322 6175.508 - 6200.714: 8.7605% ( 95) 00:07:54.322 6200.714 - 6225.920: 9.3926% ( 108) 00:07:54.322 6225.920 - 6251.126: 10.1592% ( 131) 00:07:54.322 6251.126 - 6276.332: 10.6449% ( 83) 00:07:54.322 6276.332 - 6301.538: 11.2301% ( 100) 00:07:54.322 6301.538 - 6326.745: 12.1957% ( 165) 00:07:54.322 6326.745 - 6351.951: 13.1028% ( 155) 00:07:54.322 6351.951 - 6377.157: 14.2498% ( 196) 00:07:54.322 6377.157 - 6402.363: 15.5957% ( 230) 00:07:54.322 6402.363 - 6427.569: 16.8598% ( 216) 00:07:54.322 6427.569 - 6452.775: 18.3052% ( 247) 00:07:54.322 6452.775 - 6503.188: 20.7982% ( 426) 00:07:54.322 6503.188 - 6553.600: 23.5487% ( 470) 00:07:54.322 6553.600 - 6604.012: 27.7797% ( 723) 00:07:54.322 6604.012 - 6654.425: 31.8001% ( 687) 00:07:54.322 6654.425 - 6704.837: 36.5578% ( 813) 00:07:54.322 6704.837 - 6755.249: 40.9118% ( 744) 00:07:54.322 6755.249 - 6805.662: 46.8282% ( 1011) 00:07:54.322 6805.662 - 6856.074: 50.8251% ( 683) 00:07:54.322 6856.074 - 6906.486: 54.1959% ( 576) 00:07:54.322 6906.486 - 6956.898: 57.7306% ( 604) 00:07:54.322 6956.898 - 7007.311: 60.7210% ( 511) 00:07:54.322 7007.311 - 7057.723: 63.0150% ( 392) 00:07:54.322 7057.723 - 7108.135: 65.3090% ( 392) 00:07:54.322 7108.135 - 7158.548: 66.6608% ( 231) 00:07:54.322 7158.548 - 7208.960: 67.8078% ( 196) 00:07:54.322 7208.960 - 7259.372: 69.0016% ( 204) 00:07:54.322 7259.372 - 7309.785: 69.8619% ( 147) 00:07:54.322 7309.785 - 7360.197: 71.0089% ( 196) 00:07:54.322 7360.197 - 7410.609: 71.7521% ( 127) 00:07:54.322 7410.609 - 7461.022: 72.4953% ( 127) 00:07:54.322 7461.022 - 7511.434: 73.1742% ( 116) 00:07:54.322 7511.434 - 7561.846: 73.9349% ( 130) 00:07:54.322 7561.846 - 7612.258: 74.3738% ( 75) 00:07:54.322 7612.258 - 7662.671: 74.9356% ( 96) 00:07:54.322 7662.671 - 7713.083: 75.6262% ( 118) 00:07:54.322 7713.083 - 7763.495: 76.2114% ( 100) 00:07:54.322 7763.495 - 7813.908: 77.1009% ( 152) 00:07:54.322 7813.908 - 7864.320: 77.7037% ( 103) 00:07:54.322 7864.320 - 7914.732: 78.2596% ( 95) 00:07:54.322 7914.732 - 7965.145: 78.8624% ( 103) 00:07:54.322 7965.145 - 8015.557: 79.2720% ( 70) 00:07:54.322 8015.557 - 8065.969: 79.9801% ( 121) 00:07:54.322 8065.969 - 8116.382: 81.1915% ( 207) 00:07:54.322 8116.382 - 8166.794: 82.1746% ( 168) 00:07:54.322 8166.794 - 8217.206: 83.0934% ( 157) 00:07:54.322 8217.206 - 8267.618: 84.0180% ( 158) 00:07:54.323 8267.618 - 8318.031: 84.9836% ( 165) 00:07:54.323 8318.031 - 8368.443: 86.3413% ( 232) 00:07:54.323 8368.443 - 8418.855: 87.0260% ( 117) 00:07:54.323 8418.855 - 8469.268: 87.7926% ( 131) 00:07:54.323 8469.268 - 8519.680: 88.4129% ( 106) 00:07:54.323 8519.680 - 8570.092: 88.7992% ( 66) 00:07:54.323 8570.092 - 8620.505: 89.0157% ( 37) 00:07:54.323 8620.505 - 8670.917: 89.2381% ( 38) 00:07:54.323 8670.917 
- 8721.329: 89.4019% ( 28) 00:07:54.323 8721.329 - 8771.742: 89.5424% ( 24) 00:07:54.323 8771.742 - 8822.154: 89.8174% ( 47) 00:07:54.323 8822.154 - 8872.566: 89.9813% ( 28) 00:07:54.323 8872.566 - 8922.978: 90.1100% ( 22) 00:07:54.323 8922.978 - 8973.391: 90.2563% ( 25) 00:07:54.323 8973.391 - 9023.803: 90.4085% ( 26) 00:07:54.323 9023.803 - 9074.215: 90.5665% ( 27) 00:07:54.323 9074.215 - 9124.628: 90.6484% ( 14) 00:07:54.323 9124.628 - 9175.040: 90.8357% ( 32) 00:07:54.323 9175.040 - 9225.452: 90.8708% ( 6) 00:07:54.323 9225.452 - 9275.865: 90.9059% ( 6) 00:07:54.323 9275.865 - 9326.277: 90.9293% ( 4) 00:07:54.323 9326.277 - 9376.689: 90.9527% ( 4) 00:07:54.323 9376.689 - 9427.102: 91.0054% ( 9) 00:07:54.323 9427.102 - 9477.514: 91.0815% ( 13) 00:07:54.323 9477.514 - 9527.926: 91.1634% ( 14) 00:07:54.323 9527.926 - 9578.338: 91.2161% ( 9) 00:07:54.323 9578.338 - 9628.751: 91.2629% ( 8) 00:07:54.323 9628.751 - 9679.163: 91.2921% ( 5) 00:07:54.323 9679.163 - 9729.575: 91.3214% ( 5) 00:07:54.323 9729.575 - 9779.988: 91.3565% ( 6) 00:07:54.323 9779.988 - 9830.400: 91.3858% ( 5) 00:07:54.323 9830.400 - 9880.812: 91.4267% ( 7) 00:07:54.323 9880.812 - 9931.225: 91.4560% ( 5) 00:07:54.323 9931.225 - 9981.637: 91.4911% ( 6) 00:07:54.323 9981.637 - 10032.049: 91.5321% ( 7) 00:07:54.323 10032.049 - 10082.462: 91.5847% ( 9) 00:07:54.323 10082.462 - 10132.874: 91.6433% ( 10) 00:07:54.323 10132.874 - 10183.286: 91.6959% ( 9) 00:07:54.323 10183.286 - 10233.698: 91.7603% ( 11) 00:07:54.323 10233.698 - 10284.111: 91.8422% ( 14) 00:07:54.323 10284.111 - 10334.523: 91.9300% ( 15) 00:07:54.323 10334.523 - 10384.935: 92.0178% ( 15) 00:07:54.323 10384.935 - 10435.348: 92.0997% ( 14) 00:07:54.323 10435.348 - 10485.760: 92.1816% ( 14) 00:07:54.323 10485.760 - 10536.172: 92.2694% ( 15) 00:07:54.323 10536.172 - 10586.585: 92.3865% ( 20) 00:07:54.323 10586.585 - 10636.997: 92.5386% ( 26) 00:07:54.323 10636.997 - 10687.409: 92.6849% ( 25) 00:07:54.323 10687.409 - 10737.822: 92.9249% ( 41) 00:07:54.323 10737.822 - 10788.234: 93.1531% ( 39) 00:07:54.323 10788.234 - 10838.646: 93.3287% ( 30) 00:07:54.323 10838.646 - 10889.058: 93.4808% ( 26) 00:07:54.323 10889.058 - 10939.471: 93.6505% ( 29) 00:07:54.323 10939.471 - 10989.883: 93.8144% ( 28) 00:07:54.323 10989.883 - 11040.295: 93.9665% ( 26) 00:07:54.323 11040.295 - 11090.708: 94.1128% ( 25) 00:07:54.323 11090.708 - 11141.120: 94.2533% ( 24) 00:07:54.323 11141.120 - 11191.532: 94.3703% ( 20) 00:07:54.323 11191.532 - 11241.945: 94.4522% ( 14) 00:07:54.323 11241.945 - 11292.357: 94.5576% ( 18) 00:07:54.323 11292.357 - 11342.769: 94.6980% ( 24) 00:07:54.323 11342.769 - 11393.182: 94.8034% ( 18) 00:07:54.323 11393.182 - 11443.594: 94.8912% ( 15) 00:07:54.323 11443.594 - 11494.006: 94.9848% ( 16) 00:07:54.323 11494.006 - 11544.418: 95.1018% ( 20) 00:07:54.323 11544.418 - 11594.831: 95.1955% ( 16) 00:07:54.323 11594.831 - 11645.243: 95.3710% ( 30) 00:07:54.323 11645.243 - 11695.655: 95.4354% ( 11) 00:07:54.323 11695.655 - 11746.068: 95.5290% ( 16) 00:07:54.323 11746.068 - 11796.480: 95.5817% ( 9) 00:07:54.323 11796.480 - 11846.892: 95.6344% ( 9) 00:07:54.323 11846.892 - 11897.305: 95.6929% ( 10) 00:07:54.323 11897.305 - 11947.717: 95.7807% ( 15) 00:07:54.323 11947.717 - 11998.129: 95.8919% ( 19) 00:07:54.323 11998.129 - 12048.542: 96.1786% ( 49) 00:07:54.323 12048.542 - 12098.954: 96.3191% ( 24) 00:07:54.323 12098.954 - 12149.366: 96.4068% ( 15) 00:07:54.323 12149.366 - 12199.778: 96.4888% ( 14) 00:07:54.323 12199.778 - 12250.191: 96.5765% ( 15) 00:07:54.323 12250.191 - 
12300.603: 96.6409% ( 11) 00:07:54.323 12300.603 - 12351.015: 96.6936% ( 9) 00:07:54.323 12351.015 - 12401.428: 96.7346% ( 7) 00:07:54.323 12401.428 - 12451.840: 96.7872% ( 9) 00:07:54.323 12451.840 - 12502.252: 96.8282% ( 7) 00:07:54.323 12502.252 - 12552.665: 96.8809% ( 9) 00:07:54.323 12552.665 - 12603.077: 96.9394% ( 10) 00:07:54.323 12603.077 - 12653.489: 97.0037% ( 11) 00:07:54.323 12653.489 - 12703.902: 97.0564% ( 9) 00:07:54.323 12703.902 - 12754.314: 97.1149% ( 10) 00:07:54.323 12754.314 - 12804.726: 97.1793% ( 11) 00:07:54.323 12804.726 - 12855.138: 97.2554% ( 13) 00:07:54.323 12855.138 - 12905.551: 97.3256% ( 12) 00:07:54.323 12905.551 - 13006.375: 97.4661% ( 24) 00:07:54.323 13006.375 - 13107.200: 97.5890% ( 21) 00:07:54.323 13107.200 - 13208.025: 97.7411% ( 26) 00:07:54.323 13208.025 - 13308.849: 97.8757% ( 23) 00:07:54.323 13308.849 - 13409.674: 97.9225% ( 8) 00:07:54.323 13409.674 - 13510.498: 97.9693% ( 8) 00:07:54.323 13510.498 - 13611.323: 98.0044% ( 6) 00:07:54.323 13611.323 - 13712.148: 98.0337% ( 5) 00:07:54.323 13712.148 - 13812.972: 98.0805% ( 8) 00:07:54.323 13812.972 - 13913.797: 98.1039% ( 4) 00:07:54.323 13913.797 - 14014.622: 98.1215% ( 3) 00:07:54.323 14014.622 - 14115.446: 98.1273% ( 1) 00:07:54.323 14115.446 - 14216.271: 98.1332% ( 1) 00:07:54.323 14317.095 - 14417.920: 98.1390% ( 1) 00:07:54.323 14417.920 - 14518.745: 98.1566% ( 3) 00:07:54.323 14518.745 - 14619.569: 98.1859% ( 5) 00:07:54.323 14619.569 - 14720.394: 98.2093% ( 4) 00:07:54.323 14720.394 - 14821.218: 98.2444% ( 6) 00:07:54.323 14821.218 - 14922.043: 98.4492% ( 35) 00:07:54.323 14922.043 - 15022.868: 98.7125% ( 45) 00:07:54.323 15022.868 - 15123.692: 98.9408% ( 39) 00:07:54.323 15123.692 - 15224.517: 99.0754% ( 23) 00:07:54.323 15224.517 - 15325.342: 99.1515% ( 13) 00:07:54.323 15325.342 - 15426.166: 99.1807% ( 5) 00:07:54.323 15426.166 - 15526.991: 99.2158% ( 6) 00:07:54.323 15526.991 - 15627.815: 99.2509% ( 6) 00:07:54.323 20366.572 - 20467.397: 99.2743% ( 4) 00:07:54.323 20467.397 - 20568.222: 99.3036% ( 5) 00:07:54.323 20568.222 - 20669.046: 99.3270% ( 4) 00:07:54.323 20669.046 - 20769.871: 99.3504% ( 4) 00:07:54.323 20769.871 - 20870.695: 99.3738% ( 4) 00:07:54.323 20870.695 - 20971.520: 99.3972% ( 4) 00:07:54.323 20971.520 - 21072.345: 99.4206% ( 4) 00:07:54.323 21072.345 - 21173.169: 99.4441% ( 4) 00:07:54.323 21173.169 - 21273.994: 99.4675% ( 4) 00:07:54.323 21273.994 - 21374.818: 99.4967% ( 5) 00:07:54.323 21374.818 - 21475.643: 99.5201% ( 4) 00:07:54.323 21475.643 - 21576.468: 99.5435% ( 4) 00:07:54.323 21576.468 - 21677.292: 99.5669% ( 4) 00:07:54.323 21677.292 - 21778.117: 99.5904% ( 4) 00:07:54.323 21778.117 - 21878.942: 99.6138% ( 4) 00:07:54.323 21878.942 - 21979.766: 99.6255% ( 2) 00:07:54.323 24903.680 - 25004.505: 99.6372% ( 2) 00:07:54.323 25004.505 - 25105.329: 99.6606% ( 4) 00:07:54.323 25105.329 - 25206.154: 99.6840% ( 4) 00:07:54.323 25206.154 - 25306.978: 99.7074% ( 4) 00:07:54.323 25306.978 - 25407.803: 99.7308% ( 4) 00:07:54.323 25407.803 - 25508.628: 99.7542% ( 4) 00:07:54.323 25508.628 - 25609.452: 99.7776% ( 4) 00:07:54.323 25609.452 - 25710.277: 99.8069% ( 5) 00:07:54.323 25710.277 - 25811.102: 99.8303% ( 4) 00:07:54.323 25811.102 - 26012.751: 99.8771% ( 8) 00:07:54.323 26012.751 - 26214.400: 99.9239% ( 8) 00:07:54.323 26214.400 - 26416.049: 99.9707% ( 8) 00:07:54.323 26416.049 - 26617.698: 100.0000% ( 5) 00:07:54.323 00:07:54.323 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:07:54.323 
============================================================================== 00:07:54.323 Range in us Cumulative IO count 00:07:54.323 5066.437 - 5091.643: 0.0059% ( 1) 00:07:54.323 5091.643 - 5116.849: 0.0117% ( 1) 00:07:54.323 5192.468 - 5217.674: 0.0176% ( 1) 00:07:54.323 5217.674 - 5242.880: 0.0293% ( 2) 00:07:54.323 5242.880 - 5268.086: 0.0527% ( 4) 00:07:54.323 5268.086 - 5293.292: 0.1053% ( 9) 00:07:54.323 5293.292 - 5318.498: 0.1873% ( 14) 00:07:54.323 5318.498 - 5343.705: 0.2633% ( 13) 00:07:54.323 5343.705 - 5368.911: 0.3862% ( 21) 00:07:54.323 5368.911 - 5394.117: 0.7198% ( 57) 00:07:54.323 5394.117 - 5419.323: 0.8368% ( 20) 00:07:54.323 5419.323 - 5444.529: 1.0768% ( 41) 00:07:54.323 5444.529 - 5469.735: 1.3109% ( 40) 00:07:54.323 5469.735 - 5494.942: 1.4572% ( 25) 00:07:54.323 5494.942 - 5520.148: 1.6795% ( 38) 00:07:54.323 5520.148 - 5545.354: 1.9604% ( 48) 00:07:54.323 5545.354 - 5570.560: 2.2121% ( 43) 00:07:54.323 5570.560 - 5595.766: 2.5866% ( 64) 00:07:54.323 5595.766 - 5620.972: 3.0782% ( 84) 00:07:54.323 5620.972 - 5646.178: 3.6926% ( 105) 00:07:54.323 5646.178 - 5671.385: 4.1901% ( 85) 00:07:54.323 5671.385 - 5696.591: 4.4066% ( 37) 00:07:54.323 5696.591 - 5721.797: 4.5997% ( 33) 00:07:54.323 5721.797 - 5747.003: 4.7577% ( 27) 00:07:54.323 5747.003 - 5772.209: 4.9333% ( 30) 00:07:54.323 5772.209 - 5797.415: 5.1323% ( 34) 00:07:54.323 5797.415 - 5822.622: 5.2317% ( 17) 00:07:54.323 5822.622 - 5847.828: 5.2961% ( 11) 00:07:54.323 5847.828 - 5873.034: 5.3546% ( 10) 00:07:54.323 5873.034 - 5898.240: 5.5887% ( 40) 00:07:54.323 5898.240 - 5923.446: 5.6648% ( 13) 00:07:54.323 5923.446 - 5948.652: 5.7701% ( 18) 00:07:54.323 5948.652 - 5973.858: 5.8872% ( 20) 00:07:54.323 5973.858 - 5999.065: 6.0744% ( 32) 00:07:54.324 5999.065 - 6024.271: 6.2149% ( 24) 00:07:54.324 6024.271 - 6049.477: 6.4548% ( 41) 00:07:54.324 6049.477 - 6074.683: 6.8294% ( 64) 00:07:54.324 6074.683 - 6099.889: 7.1103% ( 48) 00:07:54.324 6099.889 - 6125.095: 7.4438% ( 57) 00:07:54.324 6125.095 - 6150.302: 7.8359% ( 67) 00:07:54.324 6150.302 - 6175.508: 8.3099% ( 81) 00:07:54.324 6175.508 - 6200.714: 8.6610% ( 60) 00:07:54.324 6200.714 - 6225.920: 9.2872% ( 107) 00:07:54.324 6225.920 - 6251.126: 10.2821% ( 170) 00:07:54.324 6251.126 - 6276.332: 10.9258% ( 110) 00:07:54.324 6276.332 - 6301.538: 11.3940% ( 80) 00:07:54.324 6301.538 - 6326.745: 12.0435% ( 111) 00:07:54.324 6326.745 - 6351.951: 12.9155% ( 149) 00:07:54.324 6351.951 - 6377.157: 14.0801% ( 199) 00:07:54.324 6377.157 - 6402.363: 15.0515% ( 166) 00:07:54.324 6402.363 - 6427.569: 16.4853% ( 245) 00:07:54.324 6427.569 - 6452.775: 17.5679% ( 185) 00:07:54.324 6452.775 - 6503.188: 20.2130% ( 452) 00:07:54.324 6503.188 - 6553.600: 23.1449% ( 501) 00:07:54.324 6553.600 - 6604.012: 27.2882% ( 708) 00:07:54.324 6604.012 - 6654.425: 31.6187% ( 740) 00:07:54.324 6654.425 - 6704.837: 36.3413% ( 807) 00:07:54.324 6704.837 - 6755.249: 40.5899% ( 726) 00:07:54.324 6755.249 - 6805.662: 46.2664% ( 970) 00:07:54.324 6805.662 - 6856.074: 50.8251% ( 779) 00:07:54.324 6856.074 - 6906.486: 54.9567% ( 706) 00:07:54.324 6906.486 - 6956.898: 57.8710% ( 498) 00:07:54.324 6956.898 - 7007.311: 60.7268% ( 488) 00:07:54.324 7007.311 - 7057.723: 63.1496% ( 414) 00:07:54.324 7057.723 - 7108.135: 65.3265% ( 372) 00:07:54.324 7108.135 - 7158.548: 66.8539% ( 261) 00:07:54.324 7158.548 - 7208.960: 68.2467% ( 238) 00:07:54.324 7208.960 - 7259.372: 69.2591% ( 173) 00:07:54.324 7259.372 - 7309.785: 70.2832% ( 175) 00:07:54.324 7309.785 - 7360.197: 71.2020% ( 157) 00:07:54.324 7360.197 
- 7410.609: 72.2144% ( 173) 00:07:54.324 7410.609 - 7461.022: 73.0103% ( 136) 00:07:54.324 7461.022 - 7511.434: 73.9291% ( 157) 00:07:54.324 7511.434 - 7561.846: 74.4382% ( 87) 00:07:54.324 7561.846 - 7612.258: 75.0000% ( 96) 00:07:54.324 7612.258 - 7662.671: 75.5501% ( 94) 00:07:54.324 7662.671 - 7713.083: 76.0943% ( 93) 00:07:54.324 7713.083 - 7763.495: 76.4572% ( 62) 00:07:54.324 7763.495 - 7813.908: 76.9546% ( 85) 00:07:54.324 7813.908 - 7864.320: 77.3759% ( 72) 00:07:54.324 7864.320 - 7914.732: 78.0431% ( 114) 00:07:54.324 7914.732 - 7965.145: 78.7336% ( 118) 00:07:54.324 7965.145 - 8015.557: 79.4300% ( 119) 00:07:54.324 8015.557 - 8065.969: 80.0913% ( 113) 00:07:54.324 8065.969 - 8116.382: 80.6297% ( 92) 00:07:54.324 8116.382 - 8166.794: 81.3085% ( 116) 00:07:54.324 8166.794 - 8217.206: 82.3794% ( 183) 00:07:54.324 8217.206 - 8267.618: 83.3450% ( 165) 00:07:54.324 8267.618 - 8318.031: 84.4394% ( 187) 00:07:54.324 8318.031 - 8368.443: 85.5337% ( 187) 00:07:54.324 8368.443 - 8418.855: 86.4115% ( 150) 00:07:54.324 8418.855 - 8469.268: 87.3010% ( 152) 00:07:54.324 8469.268 - 8519.680: 87.7399% ( 75) 00:07:54.324 8519.680 - 8570.092: 88.0911% ( 60) 00:07:54.324 8570.092 - 8620.505: 88.4071% ( 54) 00:07:54.324 8620.505 - 8670.917: 88.6294% ( 38) 00:07:54.324 8670.917 - 8721.329: 88.7816% ( 26) 00:07:54.324 8721.329 - 8771.742: 88.9396% ( 27) 00:07:54.324 8771.742 - 8822.154: 89.0566% ( 20) 00:07:54.324 8822.154 - 8872.566: 89.3141% ( 44) 00:07:54.324 8872.566 - 8922.978: 89.4780% ( 28) 00:07:54.324 8922.978 - 8973.391: 89.5950% ( 20) 00:07:54.324 8973.391 - 9023.803: 89.7179% ( 21) 00:07:54.324 9023.803 - 9074.215: 89.8525% ( 23) 00:07:54.324 9074.215 - 9124.628: 90.0983% ( 42) 00:07:54.324 9124.628 - 9175.040: 90.2271% ( 22) 00:07:54.324 9175.040 - 9225.452: 90.5548% ( 56) 00:07:54.324 9225.452 - 9275.865: 90.6777% ( 21) 00:07:54.324 9275.865 - 9326.277: 90.8591% ( 31) 00:07:54.324 9326.277 - 9376.689: 90.9703% ( 19) 00:07:54.324 9376.689 - 9427.102: 91.0581% ( 15) 00:07:54.324 9427.102 - 9477.514: 91.1283% ( 12) 00:07:54.324 9477.514 - 9527.926: 91.1809% ( 9) 00:07:54.324 9527.926 - 9578.338: 91.2102% ( 5) 00:07:54.324 9578.338 - 9628.751: 91.2629% ( 9) 00:07:54.324 9628.751 - 9679.163: 91.3331% ( 12) 00:07:54.324 9679.163 - 9729.575: 91.4384% ( 18) 00:07:54.324 9729.575 - 9779.988: 91.5262% ( 15) 00:07:54.324 9779.988 - 9830.400: 91.6257% ( 17) 00:07:54.324 9830.400 - 9880.812: 91.7369% ( 19) 00:07:54.324 9880.812 - 9931.225: 91.8832% ( 25) 00:07:54.324 9931.225 - 9981.637: 91.9827% ( 17) 00:07:54.324 9981.637 - 10032.049: 92.0529% ( 12) 00:07:54.324 10032.049 - 10082.462: 92.1056% ( 9) 00:07:54.324 10082.462 - 10132.874: 92.1641% ( 10) 00:07:54.324 10132.874 - 10183.286: 92.2285% ( 11) 00:07:54.324 10183.286 - 10233.698: 92.2987% ( 12) 00:07:54.324 10233.698 - 10284.111: 92.3631% ( 11) 00:07:54.324 10284.111 - 10334.523: 92.4450% ( 14) 00:07:54.324 10334.523 - 10384.935: 92.5737% ( 22) 00:07:54.324 10384.935 - 10435.348: 92.6908% ( 20) 00:07:54.324 10435.348 - 10485.760: 92.8195% ( 22) 00:07:54.324 10485.760 - 10536.172: 92.9541% ( 23) 00:07:54.324 10536.172 - 10586.585: 93.0770% ( 21) 00:07:54.324 10586.585 - 10636.997: 93.1472% ( 12) 00:07:54.324 10636.997 - 10687.409: 93.2175% ( 12) 00:07:54.324 10687.409 - 10737.822: 93.2877% ( 12) 00:07:54.324 10737.822 - 10788.234: 93.3696% ( 14) 00:07:54.324 10788.234 - 10838.646: 93.4515% ( 14) 00:07:54.324 10838.646 - 10889.058: 93.5510% ( 17) 00:07:54.324 10889.058 - 10939.471: 93.6330% ( 14) 00:07:54.324 10939.471 - 10989.883: 93.7324% ( 17) 
00:07:54.324 10989.883 - 11040.295: 93.8378% ( 18) 00:07:54.324 11040.295 - 11090.708: 93.9841% ( 25) 00:07:54.324 11090.708 - 11141.120: 94.1304% ( 25) 00:07:54.324 11141.120 - 11191.532: 94.2825% ( 26) 00:07:54.324 11191.532 - 11241.945: 94.4405% ( 27) 00:07:54.324 11241.945 - 11292.357: 94.5166% ( 13) 00:07:54.324 11292.357 - 11342.769: 94.6103% ( 16) 00:07:54.324 11342.769 - 11393.182: 94.6746% ( 11) 00:07:54.324 11393.182 - 11443.594: 94.7331% ( 10) 00:07:54.324 11443.594 - 11494.006: 94.8034% ( 12) 00:07:54.324 11494.006 - 11544.418: 94.9204% ( 20) 00:07:54.324 11544.418 - 11594.831: 95.0199% ( 17) 00:07:54.324 11594.831 - 11645.243: 95.2072% ( 32) 00:07:54.324 11645.243 - 11695.655: 95.3066% ( 17) 00:07:54.324 11695.655 - 11746.068: 95.4178% ( 19) 00:07:54.324 11746.068 - 11796.480: 95.5115% ( 16) 00:07:54.324 11796.480 - 11846.892: 95.5875% ( 13) 00:07:54.324 11846.892 - 11897.305: 95.6812% ( 16) 00:07:54.324 11897.305 - 11947.717: 95.7690% ( 15) 00:07:54.324 11947.717 - 11998.129: 95.8450% ( 13) 00:07:54.324 11998.129 - 12048.542: 95.9094% ( 11) 00:07:54.324 12048.542 - 12098.954: 95.9738% ( 11) 00:07:54.324 12098.954 - 12149.366: 96.0616% ( 15) 00:07:54.324 12149.366 - 12199.778: 96.1610% ( 17) 00:07:54.324 12199.778 - 12250.191: 96.2605% ( 17) 00:07:54.324 12250.191 - 12300.603: 96.3542% ( 16) 00:07:54.324 12300.603 - 12351.015: 96.4419% ( 15) 00:07:54.324 12351.015 - 12401.428: 96.5765% ( 23) 00:07:54.324 12401.428 - 12451.840: 96.7053% ( 22) 00:07:54.324 12451.840 - 12502.252: 96.7697% ( 11) 00:07:54.324 12502.252 - 12552.665: 96.8106% ( 7) 00:07:54.324 12552.665 - 12603.077: 96.8516% ( 7) 00:07:54.324 12603.077 - 12653.489: 96.9101% ( 10) 00:07:54.324 12653.489 - 12703.902: 96.9569% ( 8) 00:07:54.324 12703.902 - 12754.314: 96.9920% ( 6) 00:07:54.324 12754.314 - 12804.726: 97.0389% ( 8) 00:07:54.324 12804.726 - 12855.138: 97.0974% ( 10) 00:07:54.324 12855.138 - 12905.551: 97.1442% ( 8) 00:07:54.324 12905.551 - 13006.375: 97.2671% ( 21) 00:07:54.324 13006.375 - 13107.200: 97.4017% ( 23) 00:07:54.324 13107.200 - 13208.025: 97.4953% ( 16) 00:07:54.324 13208.025 - 13308.849: 97.6007% ( 18) 00:07:54.324 13308.849 - 13409.674: 97.6767% ( 13) 00:07:54.324 13409.674 - 13510.498: 97.7996% ( 21) 00:07:54.324 13510.498 - 13611.323: 97.9108% ( 19) 00:07:54.324 13611.323 - 13712.148: 98.0805% ( 29) 00:07:54.324 13712.148 - 13812.972: 98.1098% ( 5) 00:07:54.324 13812.972 - 13913.797: 98.1215% ( 2) 00:07:54.324 13913.797 - 14014.622: 98.1625% ( 7) 00:07:54.324 14014.622 - 14115.446: 98.2502% ( 15) 00:07:54.324 14115.446 - 14216.271: 98.2678% ( 3) 00:07:54.324 14216.271 - 14317.095: 98.2853% ( 3) 00:07:54.324 14317.095 - 14417.920: 98.3029% ( 3) 00:07:54.324 14417.920 - 14518.745: 98.3263% ( 4) 00:07:54.324 14518.745 - 14619.569: 98.3497% ( 4) 00:07:54.324 14619.569 - 14720.394: 98.4199% ( 12) 00:07:54.324 14720.394 - 14821.218: 98.5253% ( 18) 00:07:54.324 14821.218 - 14922.043: 98.7008% ( 30) 00:07:54.324 14922.043 - 15022.868: 98.8003% ( 17) 00:07:54.324 15022.868 - 15123.692: 98.9642% ( 28) 00:07:54.324 15123.692 - 15224.517: 99.1983% ( 40) 00:07:54.324 15224.517 - 15325.342: 99.2392% ( 7) 00:07:54.324 15325.342 - 15426.166: 99.2509% ( 2) 00:07:54.324 18450.905 - 18551.729: 99.2626% ( 2) 00:07:54.324 18551.729 - 18652.554: 99.2860% ( 4) 00:07:54.324 18652.554 - 18753.378: 99.3153% ( 5) 00:07:54.324 18753.378 - 18854.203: 99.3387% ( 4) 00:07:54.324 18854.203 - 18955.028: 99.3563% ( 3) 00:07:54.324 18955.028 - 19055.852: 99.3855% ( 5) 00:07:54.324 19055.852 - 19156.677: 99.4089% ( 4) 
00:07:54.324 19156.677 - 19257.502: 99.4324% ( 4) 00:07:54.324 19257.502 - 19358.326: 99.4558% ( 4) 00:07:54.324 19358.326 - 19459.151: 99.4792% ( 4) 00:07:54.324 19459.151 - 19559.975: 99.5026% ( 4) 00:07:54.324 19559.975 - 19660.800: 99.5318% ( 5) 00:07:54.324 19660.800 - 19761.625: 99.5552% ( 4) 00:07:54.324 19761.625 - 19862.449: 99.5787% ( 4) 00:07:54.324 19862.449 - 19963.274: 99.6021% ( 4) 00:07:54.324 19963.274 - 20064.098: 99.6255% ( 4) 00:07:54.325 23088.837 - 23189.662: 99.6372% ( 2) 00:07:54.325 23189.662 - 23290.486: 99.6664% ( 5) 00:07:54.325 23290.486 - 23391.311: 99.6840% ( 3) 00:07:54.325 23391.311 - 23492.135: 99.7074% ( 4) 00:07:54.325 23492.135 - 23592.960: 99.7308% ( 4) 00:07:54.325 23592.960 - 23693.785: 99.7542% ( 4) 00:07:54.325 23693.785 - 23794.609: 99.7776% ( 4) 00:07:54.325 23794.609 - 23895.434: 99.8010% ( 4) 00:07:54.325 23895.434 - 23996.258: 99.8244% ( 4) 00:07:54.325 23996.258 - 24097.083: 99.8478% ( 4) 00:07:54.325 24097.083 - 24197.908: 99.8713% ( 4) 00:07:54.325 24197.908 - 24298.732: 99.8947% ( 4) 00:07:54.325 24298.732 - 24399.557: 99.9239% ( 5) 00:07:54.325 24399.557 - 24500.382: 99.9473% ( 4) 00:07:54.325 24500.382 - 24601.206: 99.9707% ( 4) 00:07:54.325 24601.206 - 24702.031: 99.9941% ( 4) 00:07:54.325 24702.031 - 24802.855: 100.0000% ( 1) 00:07:54.325 00:07:54.325 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:07:54.325 ============================================================================== 00:07:54.325 Range in us Cumulative IO count 00:07:54.325 5066.437 - 5091.643: 0.0058% ( 1) 00:07:54.325 5116.849 - 5142.055: 0.0117% ( 1) 00:07:54.325 5142.055 - 5167.262: 0.0175% ( 1) 00:07:54.325 5192.468 - 5217.674: 0.0292% ( 2) 00:07:54.325 5217.674 - 5242.880: 0.0466% ( 3) 00:07:54.325 5242.880 - 5268.086: 0.0700% ( 4) 00:07:54.325 5268.086 - 5293.292: 0.1166% ( 8) 00:07:54.325 5293.292 - 5318.498: 0.1807% ( 11) 00:07:54.325 5318.498 - 5343.705: 0.2624% ( 14) 00:07:54.325 5343.705 - 5368.911: 0.3498% ( 15) 00:07:54.325 5368.911 - 5394.117: 0.5597% ( 36) 00:07:54.325 5394.117 - 5419.323: 0.8570% ( 51) 00:07:54.325 5419.323 - 5444.529: 0.9853% ( 22) 00:07:54.325 5444.529 - 5469.735: 1.3876% ( 69) 00:07:54.325 5469.735 - 5494.942: 1.5217% ( 23) 00:07:54.325 5494.942 - 5520.148: 1.6325% ( 19) 00:07:54.325 5520.148 - 5545.354: 1.8424% ( 36) 00:07:54.325 5545.354 - 5570.560: 2.1688% ( 56) 00:07:54.325 5570.560 - 5595.766: 2.5128% ( 59) 00:07:54.325 5595.766 - 5620.972: 3.0201% ( 87) 00:07:54.325 5620.972 - 5646.178: 3.6264% ( 104) 00:07:54.325 5646.178 - 5671.385: 4.0870% ( 79) 00:07:54.325 5671.385 - 5696.591: 4.2619% ( 30) 00:07:54.325 5696.591 - 5721.797: 4.4018% ( 24) 00:07:54.325 5721.797 - 5747.003: 4.5826% ( 31) 00:07:54.325 5747.003 - 5772.209: 4.8158% ( 40) 00:07:54.325 5772.209 - 5797.415: 4.9557% ( 24) 00:07:54.325 5797.415 - 5822.622: 5.0257% ( 12) 00:07:54.325 5822.622 - 5847.828: 5.0898% ( 11) 00:07:54.325 5847.828 - 5873.034: 5.1539% ( 11) 00:07:54.325 5873.034 - 5898.240: 5.3347% ( 31) 00:07:54.325 5898.240 - 5923.446: 5.5154% ( 31) 00:07:54.325 5923.446 - 5948.652: 5.5970% ( 14) 00:07:54.325 5948.652 - 5973.858: 5.7253% ( 22) 00:07:54.325 5973.858 - 5999.065: 5.9468% ( 38) 00:07:54.325 5999.065 - 6024.271: 6.0751% ( 22) 00:07:54.325 6024.271 - 6049.477: 6.1975% ( 21) 00:07:54.325 6049.477 - 6074.683: 6.6756% ( 82) 00:07:54.325 6074.683 - 6099.889: 6.8505% ( 30) 00:07:54.325 6099.889 - 6125.095: 7.2470% ( 68) 00:07:54.325 6125.095 - 6150.302: 7.7484% ( 86) 00:07:54.325 6150.302 - 6175.508: 8.0049% ( 44) 00:07:54.325 
6175.508 - 6200.714: 8.3722% ( 63) 00:07:54.325 6200.714 - 6225.920: 9.5382% ( 200) 00:07:54.325 6225.920 - 6251.126: 10.4478% ( 156) 00:07:54.325 6251.126 - 6276.332: 11.0366% ( 101) 00:07:54.325 6276.332 - 6301.538: 11.5089% ( 81) 00:07:54.325 6301.538 - 6326.745: 12.0569% ( 94) 00:07:54.325 6326.745 - 6351.951: 12.9897% ( 160) 00:07:54.325 6351.951 - 6377.157: 13.8876% ( 154) 00:07:54.325 6377.157 - 6402.363: 15.2868% ( 240) 00:07:54.325 6402.363 - 6427.569: 16.4004% ( 191) 00:07:54.325 6427.569 - 6452.775: 18.1553% ( 301) 00:07:54.325 6452.775 - 6503.188: 20.5749% ( 415) 00:07:54.325 6503.188 - 6553.600: 23.6066% ( 520) 00:07:54.325 6553.600 - 6604.012: 27.5886% ( 683) 00:07:54.325 6604.012 - 6654.425: 31.3200% ( 640) 00:07:54.325 6654.425 - 6704.837: 35.7334% ( 757) 00:07:54.325 6704.837 - 6755.249: 39.8496% ( 706) 00:07:54.325 6755.249 - 6805.662: 44.7062% ( 833) 00:07:54.325 6805.662 - 6856.074: 49.5219% ( 826) 00:07:54.325 6856.074 - 6906.486: 54.0229% ( 772) 00:07:54.325 6906.486 - 6956.898: 57.8242% ( 652) 00:07:54.325 6956.898 - 7007.311: 60.9783% ( 541) 00:07:54.325 7007.311 - 7057.723: 63.6544% ( 459) 00:07:54.325 7057.723 - 7108.135: 65.3976% ( 299) 00:07:54.325 7108.135 - 7158.548: 66.9951% ( 274) 00:07:54.325 7158.548 - 7208.960: 68.8141% ( 312) 00:07:54.325 7208.960 - 7259.372: 69.9627% ( 197) 00:07:54.325 7259.372 - 7309.785: 71.0529% ( 187) 00:07:54.325 7309.785 - 7360.197: 71.8517% ( 137) 00:07:54.325 7360.197 - 7410.609: 72.6737% ( 141) 00:07:54.325 7410.609 - 7461.022: 73.3675% ( 119) 00:07:54.325 7461.022 - 7511.434: 74.0030% ( 109) 00:07:54.325 7511.434 - 7561.846: 74.6385% ( 109) 00:07:54.325 7561.846 - 7612.258: 75.4139% ( 133) 00:07:54.325 7612.258 - 7662.671: 75.7871% ( 64) 00:07:54.325 7662.671 - 7713.083: 76.2477% ( 79) 00:07:54.325 7713.083 - 7763.495: 76.7083% ( 79) 00:07:54.325 7763.495 - 7813.908: 77.0173% ( 53) 00:07:54.325 7813.908 - 7864.320: 77.5245% ( 87) 00:07:54.325 7864.320 - 7914.732: 78.2941% ( 132) 00:07:54.325 7914.732 - 7965.145: 78.7313% ( 75) 00:07:54.325 7965.145 - 8015.557: 79.2619% ( 91) 00:07:54.325 8015.557 - 8065.969: 79.9790% ( 123) 00:07:54.325 8065.969 - 8116.382: 80.6903% ( 122) 00:07:54.325 8116.382 - 8166.794: 81.3316% ( 110) 00:07:54.325 8166.794 - 8217.206: 81.9496% ( 106) 00:07:54.325 8217.206 - 8267.618: 82.9291% ( 168) 00:07:54.325 8267.618 - 8318.031: 83.7220% ( 136) 00:07:54.325 8318.031 - 8368.443: 84.5208% ( 137) 00:07:54.325 8368.443 - 8418.855: 85.4711% ( 163) 00:07:54.325 8418.855 - 8469.268: 86.1590% ( 118) 00:07:54.325 8469.268 - 8519.680: 86.7945% ( 109) 00:07:54.325 8519.680 - 8570.092: 87.1502% ( 61) 00:07:54.325 8570.092 - 8620.505: 87.3776% ( 39) 00:07:54.325 8620.505 - 8670.917: 87.5875% ( 36) 00:07:54.325 8670.917 - 8721.329: 87.7624% ( 30) 00:07:54.325 8721.329 - 8771.742: 87.9956% ( 40) 00:07:54.325 8771.742 - 8822.154: 88.1763% ( 31) 00:07:54.325 8822.154 - 8872.566: 88.3862% ( 36) 00:07:54.325 8872.566 - 8922.978: 88.5844% ( 34) 00:07:54.325 8922.978 - 8973.391: 88.8176% ( 40) 00:07:54.325 8973.391 - 9023.803: 89.0100% ( 33) 00:07:54.325 9023.803 - 9074.215: 89.1908% ( 31) 00:07:54.325 9074.215 - 9124.628: 89.3365% ( 25) 00:07:54.325 9124.628 - 9175.040: 89.4531% ( 20) 00:07:54.325 9175.040 - 9225.452: 89.5989% ( 25) 00:07:54.325 9225.452 - 9275.865: 89.9487% ( 60) 00:07:54.325 9275.865 - 9326.277: 90.1236% ( 30) 00:07:54.325 9326.277 - 9376.689: 90.4734% ( 60) 00:07:54.325 9376.689 - 9427.102: 90.9107% ( 75) 00:07:54.325 9427.102 - 9477.514: 91.1439% ( 40) 00:07:54.325 9477.514 - 9527.926: 91.4004% ( 44) 
00:07:54.325 9527.926 - 9578.338: 91.5637% ( 28) 00:07:54.325 9578.338 - 9628.751: 91.7327% ( 29) 00:07:54.325 9628.751 - 9679.163: 91.8902% ( 27) 00:07:54.325 9679.163 - 9729.575: 92.0243% ( 23) 00:07:54.325 9729.575 - 9779.988: 92.1642% ( 24) 00:07:54.325 9779.988 - 9830.400: 92.2866% ( 21) 00:07:54.325 9830.400 - 9880.812: 92.4382% ( 26) 00:07:54.325 9880.812 - 9931.225: 92.5840% ( 25) 00:07:54.325 9931.225 - 9981.637: 92.8055% ( 38) 00:07:54.325 9981.637 - 10032.049: 92.9454% ( 24) 00:07:54.325 10032.049 - 10082.462: 93.0795% ( 23) 00:07:54.325 10082.462 - 10132.874: 93.1903% ( 19) 00:07:54.325 10132.874 - 10183.286: 93.3127% ( 21) 00:07:54.325 10183.286 - 10233.698: 93.4002% ( 15) 00:07:54.325 10233.698 - 10284.111: 93.4818% ( 14) 00:07:54.325 10284.111 - 10334.523: 93.5634% ( 14) 00:07:54.325 10334.523 - 10384.935: 93.6159% ( 9) 00:07:54.325 10384.935 - 10435.348: 93.6742% ( 10) 00:07:54.325 10435.348 - 10485.760: 93.7325% ( 10) 00:07:54.325 10485.760 - 10536.172: 93.7850% ( 9) 00:07:54.325 10536.172 - 10586.585: 93.8200% ( 6) 00:07:54.325 10586.585 - 10636.997: 93.8608% ( 7) 00:07:54.325 10636.997 - 10687.409: 93.8958% ( 6) 00:07:54.325 10687.409 - 10737.822: 93.9307% ( 6) 00:07:54.325 10737.822 - 10788.234: 93.9599% ( 5) 00:07:54.325 10788.234 - 10838.646: 93.9715% ( 2) 00:07:54.325 10838.646 - 10889.058: 93.9832% ( 2) 00:07:54.325 10889.058 - 10939.471: 93.9949% ( 2) 00:07:54.325 10939.471 - 10989.883: 94.0065% ( 2) 00:07:54.325 10989.883 - 11040.295: 94.0182% ( 2) 00:07:54.325 11040.295 - 11090.708: 94.0823% ( 11) 00:07:54.325 11090.708 - 11141.120: 94.1639% ( 14) 00:07:54.325 11141.120 - 11191.532: 94.2514% ( 15) 00:07:54.325 11191.532 - 11241.945: 94.3155% ( 11) 00:07:54.325 11241.945 - 11292.357: 94.4030% ( 15) 00:07:54.325 11292.357 - 11342.769: 94.4788% ( 13) 00:07:54.325 11342.769 - 11393.182: 94.6012% ( 21) 00:07:54.325 11393.182 - 11443.594: 94.7411% ( 24) 00:07:54.325 11443.594 - 11494.006: 94.8461% ( 18) 00:07:54.325 11494.006 - 11544.418: 94.9335% ( 15) 00:07:54.325 11544.418 - 11594.831: 95.0326% ( 17) 00:07:54.325 11594.831 - 11645.243: 95.1259% ( 16) 00:07:54.325 11645.243 - 11695.655: 95.2600% ( 23) 00:07:54.325 11695.655 - 11746.068: 95.3766% ( 20) 00:07:54.325 11746.068 - 11796.480: 95.5107% ( 23) 00:07:54.325 11796.480 - 11846.892: 95.7731% ( 45) 00:07:54.325 11846.892 - 11897.305: 95.8489% ( 13) 00:07:54.325 11897.305 - 11947.717: 95.9130% ( 11) 00:07:54.325 11947.717 - 11998.129: 95.9771% ( 11) 00:07:54.325 11998.129 - 12048.542: 96.0063% ( 5) 00:07:54.325 12048.542 - 12098.954: 96.0529% ( 8) 00:07:54.325 12098.954 - 12149.366: 96.0996% ( 8) 00:07:54.325 12149.366 - 12199.778: 96.1812% ( 14) 00:07:54.325 12199.778 - 12250.191: 96.2512% ( 12) 00:07:54.325 12250.191 - 12300.603: 96.3328% ( 14) 00:07:54.325 12300.603 - 12351.015: 96.3969% ( 11) 00:07:54.326 12351.015 - 12401.428: 96.4611% ( 11) 00:07:54.326 12401.428 - 12451.840: 96.4960% ( 6) 00:07:54.326 12451.840 - 12502.252: 96.5427% ( 8) 00:07:54.326 12502.252 - 12552.665: 96.5835% ( 7) 00:07:54.326 12552.665 - 12603.077: 96.6418% ( 10) 00:07:54.326 12603.077 - 12653.489: 96.6826% ( 7) 00:07:54.326 12653.489 - 12703.902: 96.7409% ( 10) 00:07:54.326 12703.902 - 12754.314: 96.7934% ( 9) 00:07:54.326 12754.314 - 12804.726: 96.8400% ( 8) 00:07:54.326 12804.726 - 12855.138: 96.8808% ( 7) 00:07:54.326 12855.138 - 12905.551: 96.9333% ( 9) 00:07:54.326 12905.551 - 13006.375: 96.9916% ( 10) 00:07:54.326 13006.375 - 13107.200: 97.0149% ( 4) 00:07:54.326 13107.200 - 13208.025: 97.0382% ( 4) 00:07:54.326 13208.025 - 
13308.849: 97.0791% ( 7) 00:07:54.326 13308.849 - 13409.674: 97.2598% ( 31) 00:07:54.326 13409.674 - 13510.498: 97.4289% ( 29) 00:07:54.326 13510.498 - 13611.323: 97.9361% ( 87) 00:07:54.326 13611.323 - 13712.148: 98.1518% ( 37) 00:07:54.326 13712.148 - 13812.972: 98.2684% ( 20) 00:07:54.326 13812.972 - 13913.797: 98.3326% ( 11) 00:07:54.326 13913.797 - 14014.622: 98.4025% ( 12) 00:07:54.326 14014.622 - 14115.446: 98.4783% ( 13) 00:07:54.326 14115.446 - 14216.271: 98.5833% ( 18) 00:07:54.326 14216.271 - 14317.095: 98.7115% ( 22) 00:07:54.326 14317.095 - 14417.920: 98.8806% ( 29) 00:07:54.326 14417.920 - 14518.745: 99.0264% ( 25) 00:07:54.326 14518.745 - 14619.569: 99.1021% ( 13) 00:07:54.326 14619.569 - 14720.394: 99.1488% ( 8) 00:07:54.326 14720.394 - 14821.218: 99.1779% ( 5) 00:07:54.326 14821.218 - 14922.043: 99.2537% ( 13) 00:07:54.326 14922.043 - 15022.868: 99.4928% ( 41) 00:07:54.326 15022.868 - 15123.692: 99.5452% ( 9) 00:07:54.326 15123.692 - 15224.517: 99.6035% ( 10) 00:07:54.326 15224.517 - 15325.342: 99.6269% ( 4) 00:07:54.326 17845.957 - 17946.782: 99.6385% ( 2) 00:07:54.326 17946.782 - 18047.606: 99.6618% ( 4) 00:07:54.326 18047.606 - 18148.431: 99.6852% ( 4) 00:07:54.326 18148.431 - 18249.255: 99.7085% ( 4) 00:07:54.326 18249.255 - 18350.080: 99.7318% ( 4) 00:07:54.326 18350.080 - 18450.905: 99.7551% ( 4) 00:07:54.326 18450.905 - 18551.729: 99.7785% ( 4) 00:07:54.326 18551.729 - 18652.554: 99.8018% ( 4) 00:07:54.326 18652.554 - 18753.378: 99.8251% ( 4) 00:07:54.326 18753.378 - 18854.203: 99.8542% ( 5) 00:07:54.326 18854.203 - 18955.028: 99.8776% ( 4) 00:07:54.326 18955.028 - 19055.852: 99.9009% ( 4) 00:07:54.326 19055.852 - 19156.677: 99.9242% ( 4) 00:07:54.326 19156.677 - 19257.502: 99.9475% ( 4) 00:07:54.326 19257.502 - 19358.326: 99.9767% ( 5) 00:07:54.326 19358.326 - 19459.151: 100.0000% ( 4) 00:07:54.326 00:07:54.326 19:09:38 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:07:54.326 00:07:54.326 real 0m2.516s 00:07:54.326 user 0m2.219s 00:07:54.326 sys 0m0.193s 00:07:54.326 19:09:38 nvme.nvme_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:54.326 19:09:38 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:07:54.326 ************************************ 00:07:54.326 END TEST nvme_perf 00:07:54.326 ************************************ 00:07:54.326 19:09:38 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:07:54.326 19:09:38 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:54.326 19:09:38 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:54.326 19:09:38 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:54.326 ************************************ 00:07:54.326 START TEST nvme_hello_world 00:07:54.326 ************************************ 00:07:54.326 19:09:38 nvme.nvme_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:07:54.326 Initializing NVMe Controllers 00:07:54.326 Attached to 0000:00:10.0 00:07:54.326 Namespace ID: 1 size: 6GB 00:07:54.326 Attached to 0000:00:11.0 00:07:54.326 Namespace ID: 1 size: 5GB 00:07:54.326 Attached to 0000:00:13.0 00:07:54.326 Namespace ID: 1 size: 1GB 00:07:54.326 Attached to 0000:00:12.0 00:07:54.326 Namespace ID: 1 size: 4GB 00:07:54.326 Namespace ID: 2 size: 4GB 00:07:54.326 Namespace ID: 3 size: 4GB 00:07:54.326 Initialization complete. 00:07:54.326 INFO: using host memory buffer for IO 00:07:54.326 Hello world! 
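The tables above are cumulative latency distributions: each bucket line gives a latency range in microseconds, the cumulative percentage of I/Os completed at or below that range, and the bucket's I/O count. Any percentile can therefore be read off as the upper bound of the first bucket whose cumulative percentage reaches the target. A minimal sketch of such a parser, assuming bucket lines exactly as printed in this log; the function and any file paths are illustrative, not part of SPDK or the test suite:

    import re

    # Bucket lines look like: "11443.594 - 11494.006: 94.9906% ( 10)"
    BUCKET_RE = re.compile(r'(\d+\.\d+)\s*-\s*(\d+\.\d+):\s*(\d+\.\d+)%\s*\(\s*(\d+)\)')

    def latency_percentile(histogram_text, target=99.0):
        # Return the upper bound (in us) of the first bucket whose
        # cumulative percentage reaches `target`, or None if never reached.
        for _lo, hi, cum_pct, _count in BUCKET_RE.findall(histogram_text):
            if float(cum_pct) >= target:
                return float(hi)
        return None

In the full 0000:00:12.0 NSID 1 table, for example, the cumulative column first reaches 99% in the 15123.692 - 15224.517 us bucket (99.0754%), so p99 for that namespace is roughly 15.2 ms.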
00:07:54.326  19:09:38 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:07:54.326  19:09:38 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:07:54.326  19:09:38 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:54.326  19:09:38 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:54.326 ************************************
00:07:54.326 START TEST nvme_hello_world
00:07:54.326 ************************************
00:07:54.326  19:09:38 nvme.nvme_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:07:54.326 Initializing NVMe Controllers
00:07:54.326 Attached to 0000:00:10.0
00:07:54.326   Namespace ID: 1 size: 6GB
00:07:54.326 Attached to 0000:00:11.0
00:07:54.326   Namespace ID: 1 size: 5GB
00:07:54.326 Attached to 0000:00:13.0
00:07:54.326   Namespace ID: 1 size: 1GB
00:07:54.326 Attached to 0000:00:12.0
00:07:54.326   Namespace ID: 1 size: 4GB
00:07:54.326   Namespace ID: 2 size: 4GB
00:07:54.326   Namespace ID: 3 size: 4GB
00:07:54.326 Initialization complete.
00:07:54.326 INFO: using host memory buffer for IO
00:07:54.326 Hello world!
00:07:54.326 INFO: using host memory buffer for IO
00:07:54.326 Hello world!
00:07:54.326 INFO: using host memory buffer for IO
00:07:54.326 Hello world!
00:07:54.326 INFO: using host memory buffer for IO
00:07:54.326 Hello world!
00:07:54.326 INFO: using host memory buffer for IO
00:07:54.326 Hello world!
00:07:54.326 INFO: using host memory buffer for IO
00:07:54.326 Hello world!
00:07:54.326
00:07:54.326 real	0m0.238s
00:07:54.326 user	0m0.085s
00:07:54.326 sys	0m0.103s
00:07:54.326 ************************************
00:07:54.326 END TEST nvme_hello_world
00:07:54.326 ************************************
00:07:54.326  19:09:38 nvme.nvme_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:54.326  19:09:38 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x
00:07:54.326  19:09:38 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:07:54.326  19:09:38 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:54.326  19:09:38 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:54.326  19:09:38 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:54.326 ************************************
00:07:54.326 START TEST nvme_sgl
00:07:54.326 ************************************
00:07:54.326  19:09:38 nvme.nvme_sgl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:07:54.585 0000:00:10.0: build_io_request_0 Invalid IO length parameter
00:07:54.585 0000:00:10.0: build_io_request_1 Invalid IO length parameter
00:07:54.585 0000:00:10.0: build_io_request_3 Invalid IO length parameter
00:07:54.585 0000:00:10.0: build_io_request_8 Invalid IO length parameter
00:07:54.585 0000:00:10.0: build_io_request_9 Invalid IO length parameter
00:07:54.585 0000:00:10.0: build_io_request_11 Invalid IO length parameter
00:07:54.585 0000:00:11.0: build_io_request_0 Invalid IO length parameter
00:07:54.585 0000:00:11.0: build_io_request_1 Invalid IO length parameter
00:07:54.585 0000:00:11.0: build_io_request_3 Invalid IO length parameter
00:07:54.585 0000:00:11.0: build_io_request_8 Invalid IO length parameter
00:07:54.585 0000:00:11.0: build_io_request_9 Invalid IO length parameter
00:07:54.585 0000:00:11.0: build_io_request_11 Invalid IO length parameter
00:07:54.585 0000:00:13.0: build_io_request_0 Invalid IO length parameter
00:07:54.585 0000:00:13.0: build_io_request_1 Invalid IO length parameter
00:07:54.585 0000:00:13.0: build_io_request_2 Invalid IO length parameter
00:07:54.585 0000:00:13.0: build_io_request_3 Invalid IO length parameter
00:07:54.585 0000:00:13.0: build_io_request_4 Invalid IO length parameter
00:07:54.585 0000:00:13.0: build_io_request_5 Invalid IO length parameter
00:07:54.585 0000:00:13.0: build_io_request_6 Invalid IO length parameter
00:07:54.585 0000:00:13.0: build_io_request_7 Invalid IO length parameter
00:07:54.585 0000:00:13.0: build_io_request_8 Invalid IO length parameter
00:07:54.585 0000:00:13.0: build_io_request_9 Invalid IO length parameter
00:07:54.585 0000:00:13.0: build_io_request_10 Invalid IO length parameter
00:07:54.585 0000:00:13.0: build_io_request_11 Invalid IO length parameter
00:07:54.585 0000:00:12.0: build_io_request_0 Invalid IO length parameter
00:07:54.585 0000:00:12.0: build_io_request_1 Invalid IO length parameter
00:07:54.585 0000:00:12.0: build_io_request_2 Invalid IO length parameter
00:07:54.585 0000:00:12.0: build_io_request_3 Invalid IO length parameter
00:07:54.585 0000:00:12.0: build_io_request_4 Invalid IO length parameter
00:07:54.585 0000:00:12.0: build_io_request_5 Invalid IO length parameter
00:07:54.585 0000:00:12.0: build_io_request_6 Invalid IO length parameter
00:07:54.586 0000:00:12.0: build_io_request_7 Invalid IO length parameter
00:07:54.586 0000:00:12.0: build_io_request_8 Invalid IO length parameter
00:07:54.586 0000:00:12.0: build_io_request_9 Invalid IO length parameter
00:07:54.586 0000:00:12.0: build_io_request_10 Invalid IO length parameter
00:07:54.586 0000:00:12.0: build_io_request_11 Invalid IO length parameter
00:07:54.586 NVMe Readv/Writev Request test
00:07:54.586 Attached to 0000:00:10.0
00:07:54.586 Attached to 0000:00:11.0
00:07:54.586 Attached to 0000:00:13.0
00:07:54.586 Attached to 0000:00:12.0
00:07:54.586 0000:00:10.0: build_io_request_2 test passed
00:07:54.586 0000:00:10.0: build_io_request_4 test passed
00:07:54.586 0000:00:10.0: build_io_request_5 test passed
00:07:54.586 0000:00:10.0: build_io_request_6 test passed
00:07:54.586 0000:00:10.0: build_io_request_7 test passed
00:07:54.586 0000:00:10.0: build_io_request_10 test passed
00:07:54.586 0000:00:11.0: build_io_request_2 test passed
00:07:54.586 0000:00:11.0: build_io_request_4 test passed
00:07:54.586 0000:00:11.0: build_io_request_5 test passed
00:07:54.586 0000:00:11.0: build_io_request_6 test passed
00:07:54.586 0000:00:11.0: build_io_request_7 test passed
00:07:54.586 0000:00:11.0: build_io_request_10 test passed
00:07:54.586 Cleaning up...
00:07:54.586 ************************************
00:07:54.586 END TEST nvme_sgl
00:07:54.586 ************************************
00:07:54.586
00:07:54.586 real	0m0.280s
00:07:54.586 user	0m0.140s
00:07:54.586 sys	0m0.094s
00:07:54.586  19:09:38 nvme.nvme_sgl -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:54.586  19:09:38 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x
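In the nvme_sgl output above, each build_io_request_N case constructs a scatter-gather list: an ordered set of (address, length) buffer segments describing one logical I/O. The cases reported as "Invalid IO length parameter" are negative cases whose total transfer length is unacceptable for that namespace, for example not a whole multiple of its block size, which is also why the same request IDs can pass on one controller and be rejected on another. A purely illustrative sketch of that kind of length check, with a made-up 4096-byte block size and no use of SPDK's actual API:

    from dataclasses import dataclass

    @dataclass
    class SglSegment:
        addr: int     # buffer address (illustrative)
        length: int   # segment length in bytes

    def sgl_length_valid(segments, block_size=4096):
        # The summed segment lengths must form a whole, nonzero
        # number of logical blocks for the request to be buildable.
        total = sum(seg.length for seg in segments)
        return total > 0 and total % block_size == 0

    # Two 2 KiB segments form one 4 KiB block: buildable.
    print(sgl_length_valid([SglSegment(0x1000, 2048), SglSegment(0x2000, 2048)]))  # True
    # A lone 3 KiB segment is not a whole block: rejected, analogous to
    # the "Invalid IO length parameter" cases above.
    print(sgl_length_valid([SglSegment(0x1000, 3072)]))  # False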
00:07:54.586  19:09:38 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:07:54.586  19:09:38 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:54.586  19:09:38 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:54.586  19:09:38 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:54.586 ************************************
00:07:54.586 START TEST nvme_e2edp
00:07:54.586 ************************************
00:07:54.586  19:09:38 nvme.nvme_e2edp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:07:54.844 NVMe Write/Read with End-to-End data protection test
00:07:54.844 Attached to 0000:00:10.0
00:07:54.844 Attached to 0000:00:11.0
00:07:54.844 Attached to 0000:00:13.0
00:07:54.844 Attached to 0000:00:12.0
00:07:54.844 Cleaning up...
00:07:54.844
00:07:54.844 real	0m0.225s
00:07:54.844 user	0m0.062s
00:07:54.844 sys	0m0.118s
00:07:54.844  19:09:39 nvme.nvme_e2edp -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:54.844 ************************************
00:07:54.844 END TEST nvme_e2edp
00:07:54.844 ************************************
00:07:54.844  19:09:39 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x
00:07:54.844  19:09:39 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:07:54.844  19:09:39 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:54.844  19:09:39 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:54.844  19:09:39 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:54.844 ************************************
00:07:54.844 START TEST nvme_reserve
00:07:54.844 ************************************
00:07:54.844  19:09:39 nvme.nvme_reserve -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:07:55.102 =====================================================
00:07:55.102 NVMe Controller at PCI bus 0, device 16, function 0
00:07:55.102 =====================================================
00:07:55.102 Reservations:                Not Supported
00:07:55.102 =====================================================
00:07:55.102 NVMe Controller at PCI bus 0, device 17, function 0
00:07:55.102 =====================================================
00:07:55.102 Reservations:                Not Supported
00:07:55.102 =====================================================
00:07:55.102 NVMe Controller at PCI bus 0, device 19, function 0
00:07:55.102 =====================================================
00:07:55.102 Reservations:                Not Supported
00:07:55.102 =====================================================
00:07:55.102 NVMe Controller at PCI bus 0, device 18, function 0
00:07:55.102 =====================================================
00:07:55.102 Reservations:                Not Supported
00:07:55.102 Reservation test passed
00:07:55.102
00:07:55.102 real	0m0.209s
00:07:55.102 user	0m0.072s
00:07:55.102 sys	0m0.094s
00:07:55.102 ************************************
00:07:55.102 END TEST nvme_reserve
00:07:55.102 ************************************
00:07:55.102  19:09:39 nvme.nvme_reserve -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:55.102  19:09:39 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x
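Every completed TEST block in this log ends with a real/user/sys triple from the shell's time builtin, which makes per-test wall-clock durations easy to harvest for trend tracking across nightly runs. A minimal sketch, assuming the console output has been saved to a file (the path is hypothetical); it tolerates the log's habit of printing the timing and the END TEST banner in either order:

    import re

    TEST_END = re.compile(r'END TEST (\S+)')
    TIMING = re.compile(r'\breal\s+(\d+)m([\d.]+)s')

    def test_durations(log_path):
        # Map each test name to its wall-clock duration in seconds.
        durations = {}
        pending_time = pending_name = None
        for line in open(log_path):
            m = TIMING.search(line)
            if m:
                pending_time = int(m.group(1)) * 60 + float(m.group(2))
            m = TEST_END.search(line)
            if m:
                pending_name = m.group(1)
            if pending_time is not None and pending_name is not None:
                durations[pending_name] = pending_time
                pending_time = pending_name = None
        return durations

    # e.g. test_durations('console.log') ->
    # {'nvme_perf': 2.516, 'nvme_hello_world': 0.238, 'nvme_sgl': 0.28, ...}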
00:07:55.102  19:09:39 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:07:55.102  19:09:39 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:55.102  19:09:39 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:55.102  19:09:39 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:55.102 ************************************
00:07:55.102 START TEST nvme_err_injection
00:07:55.102 ************************************
00:07:55.103  19:09:39 nvme.nvme_err_injection -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:07:55.360 NVMe Error Injection test
00:07:55.360 Attached to 0000:00:10.0
00:07:55.360 Attached to 0000:00:11.0
00:07:55.360 Attached to 0000:00:13.0
00:07:55.360 Attached to 0000:00:12.0
00:07:55.360 0000:00:13.0: get features failed as expected
00:07:55.360 0000:00:12.0: get features failed as expected
00:07:55.360 0000:00:10.0: get features failed as expected
00:07:55.360 0000:00:11.0: get features failed as expected
00:07:55.360 0000:00:10.0: get features successfully as expected
00:07:55.360 0000:00:11.0: get features successfully as expected
00:07:55.360 0000:00:13.0: get features successfully as expected
00:07:55.360 0000:00:12.0: get features successfully as expected
00:07:55.360 0000:00:10.0: read failed as expected
00:07:55.360 0000:00:11.0: read failed as expected
00:07:55.360 0000:00:13.0: read failed as expected
00:07:55.360 0000:00:12.0: read failed as expected
00:07:55.360 0000:00:10.0: read successfully as expected
00:07:55.360 0000:00:11.0: read successfully as expected
00:07:55.360 0000:00:13.0: read successfully as expected
00:07:55.360 0000:00:12.0: read successfully as expected
00:07:55.360 Cleaning up...
00:07:55.360 ************************************
00:07:55.360 END TEST nvme_err_injection
00:07:55.360 ************************************
00:07:55.360
00:07:55.360 real	0m0.235s
00:07:55.360 user	0m0.085s
00:07:55.360 sys	0m0.099s
00:07:55.360  19:09:39 nvme.nvme_err_injection -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:55.360  19:09:39 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x
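The nvme_err_injection pass criterion above is two-phase: while a fault is injected, the admin command and the read must fail ("failed as expected"), and once the injection is cleared the same operations must succeed ("successfully as expected"). A sketch of that expected-failure structure as a pattern; the controller object and its methods are hypothetical stand-ins, not SPDK's error-injection API:

    def check_error_injection(ctrlr):
        # Phase 1: with the fault armed, the command must fail
        # ("get features failed as expected").
        ctrlr.inject_error(command='GET_FEATURES')   # hypothetical call
        assert not ctrlr.get_features()
        # Phase 2: with the fault cleared, it must succeed
        # ("get features successfully as expected").
        ctrlr.clear_injected_errors()                # hypothetical call
        assert ctrlr.get_features()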
00:07:56.994 submit (in ns) avg, min, max = 12263.9, 11054.6, 415370.8 00:07:56.994 complete (in ns) avg, min, max = 8212.6, 7798.5, 183249.2 00:07:56.994 00:07:56.994 Submit histogram 00:07:56.994 ================ 00:07:56.994 Range in us Cumulative Count 00:07:56.994 11.028 - 11.077: 0.0060% ( 1) 00:07:56.994 11.077 - 11.126: 0.0120% ( 1) 00:07:56.994 11.520 - 11.569: 0.0241% ( 2) 00:07:56.994 11.569 - 11.618: 0.0481% ( 4) 00:07:56.994 11.618 - 11.668: 0.1083% ( 10) 00:07:56.994 11.668 - 11.717: 0.3549% ( 41) 00:07:56.994 11.717 - 11.766: 0.9864% ( 105) 00:07:56.994 11.766 - 11.815: 2.8570% ( 311) 00:07:56.994 11.815 - 11.865: 7.1575% ( 715) 00:07:56.994 11.865 - 11.914: 14.8562% ( 1280) 00:07:56.994 11.914 - 11.963: 24.3594% ( 1580) 00:07:56.994 11.963 - 12.012: 35.4084% ( 1837) 00:07:56.994 12.012 - 12.062: 46.6558% ( 1870) 00:07:56.994 12.062 - 12.111: 57.3559% ( 1779) 00:07:56.994 12.111 - 12.160: 66.4080% ( 1505) 00:07:56.994 12.160 - 12.209: 73.5956% ( 1195) 00:07:56.994 12.209 - 12.258: 79.4659% ( 976) 00:07:56.994 12.258 - 12.308: 84.0190% ( 757) 00:07:56.994 12.308 - 12.357: 87.6579% ( 605) 00:07:56.994 12.357 - 12.406: 90.5209% ( 476) 00:07:56.994 12.406 - 12.455: 92.5538% ( 338) 00:07:56.994 12.455 - 12.505: 94.0094% ( 242) 00:07:56.994 12.505 - 12.554: 95.0018% ( 165) 00:07:56.994 12.554 - 12.603: 95.7356% ( 122) 00:07:56.994 12.603 - 12.702: 96.4513% ( 119) 00:07:56.994 12.702 - 12.800: 96.7882% ( 56) 00:07:56.994 12.800 - 12.898: 96.9566% ( 28) 00:07:56.994 12.898 - 12.997: 97.0588% ( 17) 00:07:56.994 12.997 - 13.095: 97.1250% ( 11) 00:07:56.994 13.095 - 13.194: 97.1731% ( 8) 00:07:56.994 13.194 - 13.292: 97.2152% ( 7) 00:07:56.994 13.292 - 13.391: 97.2272% ( 2) 00:07:56.994 13.391 - 13.489: 97.2453% ( 3) 00:07:56.994 13.489 - 13.588: 97.2693% ( 4) 00:07:56.994 13.588 - 13.686: 97.2874% ( 3) 00:07:56.994 13.686 - 13.785: 97.3054% ( 3) 00:07:56.994 13.883 - 13.982: 97.3295% ( 4) 00:07:56.994 13.982 - 14.080: 97.4077% ( 13) 00:07:56.994 14.080 - 14.178: 97.4498% ( 7) 00:07:56.994 14.178 - 14.277: 97.5280% ( 13) 00:07:56.994 14.277 - 14.375: 97.6543% ( 21) 00:07:56.994 14.375 - 14.474: 97.7686% ( 19) 00:07:56.994 14.474 - 14.572: 97.8467% ( 13) 00:07:56.994 14.572 - 14.671: 97.9490% ( 17) 00:07:56.994 14.671 - 14.769: 98.0272% ( 13) 00:07:56.994 14.769 - 14.868: 98.1234% ( 16) 00:07:56.994 14.868 - 14.966: 98.1715% ( 8) 00:07:56.994 14.966 - 15.065: 98.2136% ( 7) 00:07:56.994 15.065 - 15.163: 98.2497% ( 6) 00:07:56.994 15.163 - 15.262: 98.2918% ( 7) 00:07:56.994 15.262 - 15.360: 98.3279% ( 6) 00:07:56.994 15.360 - 15.458: 98.3580% ( 5) 00:07:56.994 15.458 - 15.557: 98.3640% ( 1) 00:07:56.994 15.557 - 15.655: 98.3700% ( 1) 00:07:56.994 15.655 - 15.754: 98.3821% ( 2) 00:07:56.994 15.754 - 15.852: 98.4001% ( 3) 00:07:56.994 15.852 - 15.951: 98.4121% ( 2) 00:07:56.994 15.951 - 16.049: 98.4181% ( 1) 00:07:56.994 16.049 - 16.148: 98.4422% ( 4) 00:07:56.994 16.148 - 16.246: 98.4663% ( 4) 00:07:56.994 16.345 - 16.443: 98.5023% ( 6) 00:07:56.994 16.443 - 16.542: 98.5204% ( 3) 00:07:56.994 16.542 - 16.640: 98.5505% ( 5) 00:07:56.994 16.640 - 16.738: 98.5685% ( 3) 00:07:56.994 16.738 - 16.837: 98.5866% ( 3) 00:07:56.994 16.837 - 16.935: 98.6046% ( 3) 00:07:56.994 16.935 - 17.034: 98.6106% ( 1) 00:07:56.994 17.034 - 17.132: 98.6407% ( 5) 00:07:56.994 17.132 - 17.231: 98.6467% ( 1) 00:07:56.994 17.231 - 17.329: 98.6527% ( 1) 00:07:56.994 17.329 - 17.428: 98.6587% ( 1) 00:07:56.994 17.428 - 17.526: 98.6948% ( 6) 00:07:56.994 17.526 - 17.625: 98.7429% ( 8) 00:07:56.994 17.625 - 17.723: 
98.8452% ( 17) 00:07:56.994 17.723 - 17.822: 98.9775% ( 22) 00:07:56.994 17.822 - 17.920: 99.0557% ( 13) 00:07:56.994 17.920 - 18.018: 99.1519% ( 16) 00:07:56.994 18.018 - 18.117: 99.2061% ( 9) 00:07:56.994 18.117 - 18.215: 99.2542% ( 8) 00:07:56.994 18.215 - 18.314: 99.3384% ( 14) 00:07:56.994 18.314 - 18.412: 99.4106% ( 12) 00:07:56.994 18.412 - 18.511: 99.4827% ( 12) 00:07:56.994 18.511 - 18.609: 99.5369% ( 9) 00:07:56.994 18.609 - 18.708: 99.5669% ( 5) 00:07:56.994 18.708 - 18.806: 99.6151% ( 8) 00:07:56.994 18.806 - 18.905: 99.6331% ( 3) 00:07:56.994 18.905 - 19.003: 99.6752% ( 7) 00:07:56.994 19.003 - 19.102: 99.6872% ( 2) 00:07:56.994 19.102 - 19.200: 99.7053% ( 3) 00:07:56.994 19.200 - 19.298: 99.7293% ( 4) 00:07:56.994 19.298 - 19.397: 99.7354% ( 1) 00:07:56.994 19.397 - 19.495: 99.7414% ( 1) 00:07:56.994 19.495 - 19.594: 99.7474% ( 1) 00:07:56.994 19.594 - 19.692: 99.7594% ( 2) 00:07:56.994 19.692 - 19.791: 99.7714% ( 2) 00:07:56.994 19.889 - 19.988: 99.7835% ( 2) 00:07:56.994 19.988 - 20.086: 99.7955% ( 2) 00:07:56.994 20.185 - 20.283: 99.8196% ( 4) 00:07:56.994 20.480 - 20.578: 99.8256% ( 1) 00:07:56.994 20.775 - 20.874: 99.8316% ( 1) 00:07:56.994 21.169 - 21.268: 99.8436% ( 2) 00:07:56.994 21.268 - 21.366: 99.8496% ( 1) 00:07:56.994 21.366 - 21.465: 99.8556% ( 1) 00:07:56.994 21.760 - 21.858: 99.8617% ( 1) 00:07:56.994 21.858 - 21.957: 99.8677% ( 1) 00:07:56.994 21.957 - 22.055: 99.8737% ( 1) 00:07:56.994 22.154 - 22.252: 99.8797% ( 1) 00:07:56.994 22.252 - 22.351: 99.8857% ( 1) 00:07:56.994 22.646 - 22.745: 99.8917% ( 1) 00:07:56.994 22.843 - 22.942: 99.8978% ( 1) 00:07:56.994 23.040 - 23.138: 99.9038% ( 1) 00:07:56.994 23.138 - 23.237: 99.9098% ( 1) 00:07:56.994 23.434 - 23.532: 99.9158% ( 1) 00:07:56.994 23.729 - 23.828: 99.9218% ( 1) 00:07:56.994 23.828 - 23.926: 99.9278% ( 1) 00:07:56.994 24.517 - 24.615: 99.9338% ( 1) 00:07:56.994 25.403 - 25.600: 99.9399% ( 1) 00:07:56.994 25.797 - 25.994: 99.9459% ( 1) 00:07:56.994 27.372 - 27.569: 99.9519% ( 1) 00:07:56.994 28.751 - 28.948: 99.9579% ( 1) 00:07:56.994 30.523 - 30.720: 99.9639% ( 1) 00:07:56.994 30.720 - 30.917: 99.9699% ( 1) 00:07:56.994 31.114 - 31.311: 99.9759% ( 1) 00:07:56.995 54.745 - 55.138: 99.9820% ( 1) 00:07:56.995 62.622 - 63.015: 99.9880% ( 1) 00:07:56.995 88.615 - 89.009: 99.9940% ( 1) 00:07:56.995 412.751 - 415.902: 100.0000% ( 1) 00:07:56.995 00:07:56.995 Complete histogram 00:07:56.995 ================== 00:07:56.995 Range in us Cumulative Count 00:07:56.995 7.778 - 7.828: 0.0782% ( 13) 00:07:56.995 7.828 - 7.877: 1.2992% ( 203) 00:07:56.995 7.877 - 7.926: 7.7710% ( 1076) 00:07:56.995 7.926 - 7.975: 22.8016% ( 2499) 00:07:56.995 7.975 - 8.025: 40.3284% ( 2914) 00:07:56.995 8.025 - 8.074: 54.2704% ( 2318) 00:07:56.995 8.074 - 8.123: 67.4065% ( 2184) 00:07:56.995 8.123 - 8.172: 78.7381% ( 1884) 00:07:56.995 8.172 - 8.222: 86.0820% ( 1221) 00:07:56.995 8.222 - 8.271: 91.0502% ( 826) 00:07:56.995 8.271 - 8.320: 93.6305% ( 429) 00:07:56.995 8.320 - 8.369: 95.4770% ( 307) 00:07:56.995 8.369 - 8.418: 96.5235% ( 174) 00:07:56.995 8.418 - 8.468: 97.2212% ( 116) 00:07:56.995 8.468 - 8.517: 97.6242% ( 67) 00:07:56.995 8.517 - 8.566: 97.8648% ( 40) 00:07:56.995 8.566 - 8.615: 98.0573% ( 32) 00:07:56.995 8.615 - 8.665: 98.1294% ( 12) 00:07:56.995 8.665 - 8.714: 98.1535% ( 4) 00:07:56.995 8.714 - 8.763: 98.1776% ( 4) 00:07:56.995 8.763 - 8.812: 98.2016% ( 4) 00:07:56.995 8.812 - 8.862: 98.2136% ( 2) 00:07:56.995 8.862 - 8.911: 98.2197% ( 1) 00:07:56.995 8.911 - 8.960: 98.2257% ( 1) 00:07:56.995 8.960 - 9.009: 98.2317% 
( 1) 00:07:56.995 9.058 - 9.108: 98.2377% ( 1) 00:07:56.995 9.108 - 9.157: 98.2437% ( 1) 00:07:56.995 9.206 - 9.255: 98.2618% ( 3) 00:07:56.995 9.255 - 9.305: 98.2738% ( 2) 00:07:56.995 9.403 - 9.452: 98.3039% ( 5) 00:07:56.995 9.452 - 9.502: 98.3099% ( 1) 00:07:56.995 9.502 - 9.551: 98.3159% ( 1) 00:07:56.995 9.551 - 9.600: 98.3219% ( 1) 00:07:56.995 9.649 - 9.698: 98.3279% ( 1) 00:07:56.995 9.698 - 9.748: 98.3399% ( 2) 00:07:56.995 9.945 - 9.994: 98.3520% ( 2) 00:07:56.995 9.994 - 10.043: 98.3640% ( 2) 00:07:56.995 10.043 - 10.092: 98.3821% ( 3) 00:07:56.995 10.142 - 10.191: 98.3881% ( 1) 00:07:56.995 10.240 - 10.289: 98.3941% ( 1) 00:07:56.995 10.338 - 10.388: 98.4001% ( 1) 00:07:56.995 10.388 - 10.437: 98.4061% ( 1) 00:07:56.995 10.782 - 10.831: 98.4181% ( 2) 00:07:56.995 10.831 - 10.880: 98.4242% ( 1) 00:07:56.995 10.929 - 10.978: 98.4302% ( 1) 00:07:56.995 11.028 - 11.077: 98.4482% ( 3) 00:07:56.995 11.077 - 11.126: 98.4602% ( 2) 00:07:56.995 11.126 - 11.175: 98.4663% ( 1) 00:07:56.995 11.175 - 11.225: 98.4783% ( 2) 00:07:56.995 11.225 - 11.274: 98.4843% ( 1) 00:07:56.995 11.274 - 11.323: 98.4963% ( 2) 00:07:56.995 11.323 - 11.372: 98.5023% ( 1) 00:07:56.995 11.372 - 11.422: 98.5084% ( 1) 00:07:56.995 11.422 - 11.471: 98.5144% ( 1) 00:07:56.995 11.520 - 11.569: 98.5204% ( 1) 00:07:56.995 11.569 - 11.618: 98.5264% ( 1) 00:07:56.995 11.618 - 11.668: 98.5324% ( 1) 00:07:56.995 11.717 - 11.766: 98.5444% ( 2) 00:07:56.995 11.766 - 11.815: 98.5565% ( 2) 00:07:56.995 11.914 - 11.963: 98.5625% ( 1) 00:07:56.995 11.963 - 12.012: 98.5685% ( 1) 00:07:56.995 12.062 - 12.111: 98.5805% ( 2) 00:07:56.995 12.160 - 12.209: 98.5866% ( 1) 00:07:56.995 12.258 - 12.308: 98.5926% ( 1) 00:07:56.995 12.603 - 12.702: 98.5986% ( 1) 00:07:56.995 12.702 - 12.800: 98.6046% ( 1) 00:07:56.995 12.800 - 12.898: 98.6106% ( 1) 00:07:56.995 12.997 - 13.095: 98.6166% ( 1) 00:07:56.995 13.194 - 13.292: 98.6226% ( 1) 00:07:56.995 13.292 - 13.391: 98.6287% ( 1) 00:07:56.995 13.391 - 13.489: 98.6527% ( 4) 00:07:56.995 13.489 - 13.588: 98.7369% ( 14) 00:07:56.995 13.588 - 13.686: 98.7911% ( 9) 00:07:56.995 13.686 - 13.785: 98.8392% ( 8) 00:07:56.995 13.785 - 13.883: 98.8632% ( 4) 00:07:56.995 13.883 - 13.982: 98.9113% ( 8) 00:07:56.995 13.982 - 14.080: 99.0136% ( 17) 00:07:56.995 14.080 - 14.178: 99.0798% ( 11) 00:07:56.995 14.178 - 14.277: 99.1700% ( 15) 00:07:56.995 14.277 - 14.375: 99.2181% ( 8) 00:07:56.995 14.375 - 14.474: 99.2662% ( 8) 00:07:56.995 14.474 - 14.572: 99.3083% ( 7) 00:07:56.995 14.572 - 14.671: 99.3745% ( 11) 00:07:56.995 14.671 - 14.769: 99.4767% ( 17) 00:07:56.995 14.769 - 14.868: 99.5188% ( 7) 00:07:56.995 14.868 - 14.966: 99.5609% ( 7) 00:07:56.995 14.966 - 15.065: 99.5730% ( 2) 00:07:56.995 15.065 - 15.163: 99.6271% ( 9) 00:07:56.995 15.163 - 15.262: 99.6451% ( 3) 00:07:56.995 15.262 - 15.360: 99.6572% ( 2) 00:07:56.995 15.360 - 15.458: 99.6812% ( 4) 00:07:56.995 15.458 - 15.557: 99.7113% ( 5) 00:07:56.995 15.557 - 15.655: 99.7233% ( 2) 00:07:56.995 15.754 - 15.852: 99.7414% ( 3) 00:07:56.995 15.951 - 16.049: 99.7534% ( 2) 00:07:56.995 16.049 - 16.148: 99.7835% ( 5) 00:07:56.995 16.148 - 16.246: 99.7895% ( 1) 00:07:56.995 16.246 - 16.345: 99.7955% ( 1) 00:07:56.995 16.837 - 16.935: 99.8015% ( 1) 00:07:56.995 16.935 - 17.034: 99.8075% ( 1) 00:07:56.995 17.034 - 17.132: 99.8196% ( 2) 00:07:56.995 17.231 - 17.329: 99.8256% ( 1) 00:07:56.995 17.428 - 17.526: 99.8316% ( 1) 00:07:56.995 17.822 - 17.920: 99.8376% ( 1) 00:07:56.995 17.920 - 18.018: 99.8436% ( 1) 00:07:56.995 18.018 - 18.117: 99.8556% ( 2) 
00:07:56.995 18.215 - 18.314: 99.8617% ( 1) 00:07:56.995 18.314 - 18.412: 99.8677% ( 1) 00:07:56.995 18.412 - 18.511: 99.8797% ( 2) 00:07:56.995 18.511 - 18.609: 99.8857% ( 1) 00:07:56.995 18.609 - 18.708: 99.8917% ( 1) 00:07:56.995 18.905 - 19.003: 99.8978% ( 1) 00:07:56.995 19.003 - 19.102: 99.9038% ( 1) 00:07:56.995 19.200 - 19.298: 99.9098% ( 1) 00:07:56.995 20.480 - 20.578: 99.9158% ( 1) 00:07:56.995 20.677 - 20.775: 99.9218% ( 1) 00:07:56.995 20.775 - 20.874: 99.9338% ( 2) 00:07:56.995 20.874 - 20.972: 99.9399% ( 1) 00:07:56.995 21.071 - 21.169: 99.9459% ( 1) 00:07:56.995 21.268 - 21.366: 99.9519% ( 1) 00:07:56.995 21.563 - 21.662: 99.9579% ( 1) 00:07:56.995 24.320 - 24.418: 99.9639% ( 1) 00:07:56.995 30.326 - 30.523: 99.9699% ( 1) 00:07:56.995 38.597 - 38.794: 99.9759% ( 1) 00:07:56.995 74.437 - 74.831: 99.9820% ( 1) 00:07:56.995 98.855 - 99.249: 99.9880% ( 1) 00:07:56.995 176.443 - 177.231: 99.9940% ( 1) 00:07:56.995 182.745 - 183.532: 100.0000% ( 1) 00:07:56.995 00:07:56.995 ************************************ 00:07:56.995 END TEST nvme_overhead 00:07:56.995 ************************************ 00:07:56.995 00:07:56.995 real 0m1.230s 00:07:56.995 user 0m1.068s 00:07:56.995 sys 0m0.109s 00:07:56.995 19:09:40 nvme.nvme_overhead -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:56.995 19:09:40 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:07:56.995 19:09:40 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:07:56.995 19:09:40 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:07:56.995 19:09:40 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:56.995 19:09:40 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:56.995 ************************************ 00:07:56.995 START TEST nvme_arbitration 00:07:56.995 ************************************ 00:07:56.995 19:09:40 nvme.nvme_arbitration -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:08:00.296 Initializing NVMe Controllers 00:08:00.296 Attached to 0000:00:10.0 00:08:00.296 Attached to 0000:00:11.0 00:08:00.296 Attached to 0000:00:13.0 00:08:00.296 Attached to 0000:00:12.0 00:08:00.296 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:08:00.296 Associating QEMU NVMe Ctrl (12341 ) with lcore 1 00:08:00.296 Associating QEMU NVMe Ctrl (12343 ) with lcore 2 00:08:00.296 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:08:00.296 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:08:00.296 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:08:00.296 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:08:00.296 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:08:00.296 Initialization complete. Launching workers. 
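The histograms above close out nvme_overhead; the arbitration run that follows is build/examples/arbitration invoked with -t 3 -i 0, and the expanded configuration it echoes is visible in the trace. A sketch of reproducing the run standalone, assuming hugepages are reserved and the NVMe devices are bound to a userspace driver (scripts/setup.sh), as the CI VM does before this stage:

    # Re-run the arbitration example outside the harness.
    cd /home/vagrant/spdk_repo/spdk
    sudo ./build/examples/arbitration -t 3 -i 0
    # Equivalent expanded configuration, as echoed in the log:
    #   arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0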
00:08:00.296 Starting thread on core 1 with urgent priority queue 00:08:00.296 Starting thread on core 2 with urgent priority queue 00:08:00.296 Starting thread on core 3 with urgent priority queue 00:08:00.296 Starting thread on core 0 with urgent priority queue 00:08:00.296 QEMU NVMe Ctrl (12340 ) core 0: 810.67 IO/s 123.36 secs/100000 ios 00:08:00.296 QEMU NVMe Ctrl (12342 ) core 0: 810.67 IO/s 123.36 secs/100000 ios 00:08:00.296 QEMU NVMe Ctrl (12341 ) core 1: 853.33 IO/s 117.19 secs/100000 ios 00:08:00.296 QEMU NVMe Ctrl (12342 ) core 1: 853.33 IO/s 117.19 secs/100000 ios 00:08:00.296 QEMU NVMe Ctrl (12343 ) core 2: 938.67 IO/s 106.53 secs/100000 ios 00:08:00.296 QEMU NVMe Ctrl (12342 ) core 3: 896.00 IO/s 111.61 secs/100000 ios 00:08:00.296 ======================================================== 00:08:00.296 00:08:00.296 00:08:00.296 real 0m3.315s 00:08:00.296 user 0m9.226s 00:08:00.296 sys 0m0.109s 00:08:00.296 ************************************ 00:08:00.296 END TEST nvme_arbitration 00:08:00.296 ************************************ 00:08:00.296 19:09:44 nvme.nvme_arbitration -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:00.296 19:09:44 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:08:00.296 19:09:44 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:08:00.296 19:09:44 nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:00.296 19:09:44 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:00.296 19:09:44 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:00.296 ************************************ 00:08:00.296 START TEST nvme_single_aen 00:08:00.296 ************************************ 00:08:00.296 19:09:44 nvme.nvme_single_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:08:00.296 Asynchronous Event Request test 00:08:00.296 Attached to 0000:00:10.0 00:08:00.296 Attached to 0000:00:11.0 00:08:00.296 Attached to 0000:00:13.0 00:08:00.296 Attached to 0000:00:12.0 00:08:00.296 Reset controller to setup AER completions for this process 00:08:00.296 Registering asynchronous event callbacks... 
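The threshold sequence logged next (read the original temperature thresholds, set them low to force an AER, then restore) is driven by test/nvme/aer/aer -T -i 0 as invoked above. The same trick can be reproduced by hand against a kernel-attached controller with nvme-cli; a sketch, assuming /dev/nvme0 and the standard Kelvin encoding of NVMe feature 0x04 (Temperature Threshold):

    # Read the current temperature threshold (NVMe feature 0x04).
    sudo nvme get-feature /dev/nvme0 -f 0x04
    # Set the threshold just below the 323 K composite temperature reported
    # in this log, so the controller raises a temperature AEN.
    sudo nvme set-feature /dev/nvme0 -f 0x04 -v 0x0142   # 0x142 = 322 K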
00:08:00.296 Getting orig temperature thresholds of all controllers 00:08:00.296 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:00.296 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:00.296 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:00.296 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:00.296 Setting all controllers temperature threshold low to trigger AER 00:08:00.296 Waiting for all controllers temperature threshold to be set lower 00:08:00.296 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:00.296 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:08:00.296 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:00.296 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:08:00.296 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:00.296 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:08:00.296 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:00.296 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:08:00.297 Waiting for all controllers to trigger AER and reset threshold 00:08:00.297 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:00.297 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:00.297 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:00.297 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:00.297 Cleaning up... 00:08:00.297 00:08:00.297 real 0m0.232s 00:08:00.297 user 0m0.075s 00:08:00.297 sys 0m0.107s 00:08:00.297 ************************************ 00:08:00.297 END TEST nvme_single_aen 00:08:00.297 ************************************ 00:08:00.297 19:09:44 nvme.nvme_single_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:00.297 19:09:44 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:08:00.297 19:09:44 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:08:00.297 19:09:44 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:00.297 19:09:44 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:00.297 19:09:44 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:00.297 ************************************ 00:08:00.297 START TEST nvme_doorbell_aers 00:08:00.297 ************************************ 00:08:00.297 19:09:44 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1129 -- # nvme_doorbell_aers 00:08:00.297 19:09:44 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:08:00.297 19:09:44 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:08:00.297 19:09:44 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:08:00.297 19:09:44 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:08:00.297 19:09:44 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:00.297 19:09:44 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # local bdfs 00:08:00.297 19:09:44 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:00.297 19:09:44 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:00.297 19:09:44 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 
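get_nvme_bdfs, traced above, builds the bdfs array by asking scripts/gen_nvme.sh for a bdev_nvme_attach_controller config and pulling each PCI address out with jq. The same enumeration as a standalone snippet:

    # Enumerate NVMe PCI addresses (BDFs) the way get_nvme_bdfs does.
    rootdir=/home/vagrant/spdk_repo/spdk
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    printf '%s\n' "${bdfs[@]}"   # here: 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0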
00:08:00.558 19:09:44 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:08:00.558 19:09:44 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:00.558 19:09:44 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:00.558 19:09:44 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:08:00.558 [2024-12-16 19:09:44.898055] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65012) is not found. Dropping the request. 00:08:10.542 Executing: test_write_invalid_db 00:08:10.542 Waiting for AER completion... 00:08:10.542 Failure: test_write_invalid_db 00:08:10.542 00:08:10.542 Executing: test_invalid_db_write_overflow_sq 00:08:10.542 Waiting for AER completion... 00:08:10.542 Failure: test_invalid_db_write_overflow_sq 00:08:10.542 00:08:10.542 Executing: test_invalid_db_write_overflow_cq 00:08:10.542 Waiting for AER completion... 00:08:10.542 Failure: test_invalid_db_write_overflow_cq 00:08:10.542 00:08:10.542 19:09:54 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:10.542 19:09:54 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:08:10.800 [2024-12-16 19:09:54.939846] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65012) is not found. Dropping the request. 00:08:20.787 Executing: test_write_invalid_db 00:08:20.787 Waiting for AER completion... 00:08:20.787 Failure: test_write_invalid_db 00:08:20.787 00:08:20.787 Executing: test_invalid_db_write_overflow_sq 00:08:20.787 Waiting for AER completion... 00:08:20.787 Failure: test_invalid_db_write_overflow_sq 00:08:20.787 00:08:20.787 Executing: test_invalid_db_write_overflow_cq 00:08:20.787 Waiting for AER completion... 00:08:20.787 Failure: test_invalid_db_write_overflow_cq 00:08:20.787 00:08:20.787 19:10:04 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:20.787 19:10:04 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:08:20.787 [2024-12-16 19:10:04.986630] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65012) is not found. Dropping the request. 00:08:30.745 Executing: test_write_invalid_db 00:08:30.745 Waiting for AER completion... 00:08:30.745 Failure: test_write_invalid_db 00:08:30.745 00:08:30.745 Executing: test_invalid_db_write_overflow_sq 00:08:30.745 Waiting for AER completion... 00:08:30.745 Failure: test_invalid_db_write_overflow_sq 00:08:30.745 00:08:30.745 Executing: test_invalid_db_write_overflow_cq 00:08:30.745 Waiting for AER completion... 
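nvme_doorbell_aers runs one doorbell_aers pass per enumerated controller, bounding each pass with timeout so a wedged AER cannot hang the suite. The loop, sketched from the trace above (bdf enumeration repeated from the previous snippet):

    # One bounded doorbell_aers pass per controller, as nvme.sh@72-73 runs it.
    rootdir=/home/vagrant/spdk_repo/spdk
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    for bdf in "${bdfs[@]}"; do
        timeout --preserve-status 10 \
            "$rootdir/test/nvme/doorbell_aers/doorbell_aers" -r "trtype:PCIe traddr:$bdf"
    done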
00:08:30.745 Failure: test_invalid_db_write_overflow_cq 00:08:30.745 00:08:30.745 19:10:14 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:30.745 19:10:14 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:08:30.745 [2024-12-16 19:10:15.007816] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65012) is not found. Dropping the request. 00:08:40.705 Executing: test_write_invalid_db 00:08:40.705 Waiting for AER completion... 00:08:40.705 Failure: test_write_invalid_db 00:08:40.705 00:08:40.705 Executing: test_invalid_db_write_overflow_sq 00:08:40.705 Waiting for AER completion... 00:08:40.705 Failure: test_invalid_db_write_overflow_sq 00:08:40.705 00:08:40.705 Executing: test_invalid_db_write_overflow_cq 00:08:40.705 Waiting for AER completion... 00:08:40.705 Failure: test_invalid_db_write_overflow_cq 00:08:40.705 00:08:40.705 00:08:40.705 real 0m40.220s 00:08:40.705 user 0m34.114s 00:08:40.705 sys 0m5.729s 00:08:40.705 19:10:24 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:40.705 19:10:24 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:08:40.705 ************************************ 00:08:40.705 END TEST nvme_doorbell_aers 00:08:40.705 ************************************ 00:08:40.705 19:10:24 nvme -- nvme/nvme.sh@97 -- # uname 00:08:40.705 19:10:24 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:08:40.705 19:10:24 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:08:40.705 19:10:24 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:08:40.705 19:10:24 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:40.705 19:10:24 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:40.705 ************************************ 00:08:40.705 START TEST nvme_multi_aen 00:08:40.705 ************************************ 00:08:40.705 19:10:24 nvme.nvme_multi_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:08:40.962 [2024-12-16 19:10:25.083721] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65012) is not found. Dropping the request. 00:08:40.962 [2024-12-16 19:10:25.083781] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65012) is not found. Dropping the request. 00:08:40.962 [2024-12-16 19:10:25.083793] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65012) is not found. Dropping the request. 00:08:40.962 [2024-12-16 19:10:25.085188] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65012) is not found. Dropping the request. 00:08:40.962 [2024-12-16 19:10:25.085213] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65012) is not found. Dropping the request. 00:08:40.962 [2024-12-16 19:10:25.085221] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65012) is not found. Dropping the request. 00:08:40.962 [2024-12-16 19:10:25.086339] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65012) is not found. 
Dropping the request. 00:08:40.962 [2024-12-16 19:10:25.086364] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65012) is not found. Dropping the request. 00:08:40.962 [2024-12-16 19:10:25.086371] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65012) is not found. Dropping the request. 00:08:40.962 [2024-12-16 19:10:25.087779] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65012) is not found. Dropping the request. 00:08:40.962 [2024-12-16 19:10:25.087875] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65012) is not found. Dropping the request. 00:08:40.962 [2024-12-16 19:10:25.087929] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65012) is not found. Dropping the request. 00:08:40.962 Child process pid: 65538 00:08:40.962 [Child] Asynchronous Event Request test 00:08:40.962 [Child] Attached to 0000:00:10.0 00:08:40.962 [Child] Attached to 0000:00:11.0 00:08:40.962 [Child] Attached to 0000:00:13.0 00:08:40.962 [Child] Attached to 0000:00:12.0 00:08:40.962 [Child] Registering asynchronous event callbacks... 00:08:40.962 [Child] Getting orig temperature thresholds of all controllers 00:08:40.962 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:40.962 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:40.962 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:40.962 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:40.962 [Child] Waiting for all controllers to trigger AER and reset threshold 00:08:40.962 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:40.962 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:40.962 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:40.962 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:40.962 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:40.962 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:40.962 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:40.962 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:40.962 [Child] Cleaning up... 00:08:41.219 Asynchronous Event Request test 00:08:41.219 Attached to 0000:00:10.0 00:08:41.219 Attached to 0000:00:11.0 00:08:41.219 Attached to 0000:00:13.0 00:08:41.219 Attached to 0000:00:12.0 00:08:41.219 Reset controller to setup AER completions for this process 00:08:41.219 Registering asynchronous event callbacks... 
00:08:41.219 Getting orig temperature thresholds of all controllers 00:08:41.219 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:41.219 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:41.219 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:41.219 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:41.219 Setting all controllers temperature threshold low to trigger AER 00:08:41.219 Waiting for all controllers temperature threshold to be set lower 00:08:41.219 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:41.219 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:08:41.219 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:41.219 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:08:41.219 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:41.219 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:08:41.219 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:41.219 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:08:41.219 Waiting for all controllers to trigger AER and reset threshold 00:08:41.219 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:41.219 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:41.219 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:41.219 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:41.219 Cleaning up... 00:08:41.219 00:08:41.219 real 0m0.455s 00:08:41.219 user 0m0.146s 00:08:41.219 sys 0m0.188s 00:08:41.219 19:10:25 nvme.nvme_multi_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:41.219 19:10:25 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:08:41.219 ************************************ 00:08:41.219 END TEST nvme_multi_aen 00:08:41.219 ************************************ 00:08:41.219 19:10:25 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:08:41.219 19:10:25 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:41.219 19:10:25 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:41.219 19:10:25 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:41.219 ************************************ 00:08:41.219 START TEST nvme_startup 00:08:41.219 ************************************ 00:08:41.219 19:10:25 nvme.nvme_startup -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:08:41.477 Initializing NVMe Controllers 00:08:41.477 Attached to 0000:00:10.0 00:08:41.477 Attached to 0000:00:11.0 00:08:41.477 Attached to 0000:00:13.0 00:08:41.477 Attached to 0000:00:12.0 00:08:41.477 Initialization complete. 00:08:41.477 Time used:137389.688 (us). 
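nvme_startup attaches every controller and reports the wall time for initialization ("Time used" above, about 137 ms for four controllers). A sketch for re-running it and extracting that figure; -t 1000000 mirrors the harness invocation, with its exact semantics left to the helper:

    # Re-run the startup timing test and pull out the reported init time.
    sudo /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 \
        | awk -F'Time used:' '/Time used:/ {print $2}'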
00:08:41.477 00:08:41.477 real 0m0.196s 00:08:41.477 user 0m0.060s 00:08:41.477 sys 0m0.089s 00:08:41.477 19:10:25 nvme.nvme_startup -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:41.477 ************************************ 00:08:41.477 19:10:25 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:08:41.477 END TEST nvme_startup 00:08:41.477 ************************************ 00:08:41.477 19:10:25 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:08:41.477 19:10:25 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:41.477 19:10:25 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:41.477 19:10:25 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:41.477 ************************************ 00:08:41.477 START TEST nvme_multi_secondary 00:08:41.477 ************************************ 00:08:41.477 19:10:25 nvme.nvme_multi_secondary -- common/autotest_common.sh@1129 -- # nvme_multi_secondary 00:08:41.477 19:10:25 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=65588 00:08:41.477 19:10:25 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=65589 00:08:41.477 19:10:25 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:08:41.477 19:10:25 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:08:41.477 19:10:25 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:08:44.763 Initializing NVMe Controllers 00:08:44.763 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:44.763 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:44.763 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:44.764 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:44.764 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:08:44.764 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:08:44.764 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:08:44.764 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:08:44.764 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:08:44.764 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:08:44.764 Initialization complete. Launching workers. 
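nvme_multi_secondary launches one primary and two secondary spdk_nvme_perf processes that share DPDK shared-memory id 0 (-i 0) on disjoint core masks, per the three invocations traced above. A rough standalone equivalent (the real script waits for the primary to initialize before starting the secondaries; that synchronization is omitted here):

    # Primary on core 0 for 5 s; secondaries on cores 1 and 2 for 3 s each.
    PERF=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
    sudo $PERF -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 & pid0=$!
    sudo $PERF -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 & pid1=$!
    sudo $PERF -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 & pid2=$!
    wait "$pid1" "$pid2"   # secondaries finish first (3 s vs 5 s)
    wait "$pid0"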
00:08:44.764 ======================================================== 00:08:44.764 Latency(us) 00:08:44.764 Device Information : IOPS MiB/s Average min max 00:08:44.764 PCIE (0000:00:10.0) NSID 1 from core 2: 1600.98 6.25 9992.44 2340.99 20560.24 00:08:44.764 PCIE (0000:00:11.0) NSID 1 from core 2: 1600.98 6.25 10008.37 2225.07 21063.72 00:08:44.764 PCIE (0000:00:13.0) NSID 1 from core 2: 1600.98 6.25 10010.68 2091.18 20914.68 00:08:44.764 PCIE (0000:00:12.0) NSID 1 from core 2: 1600.98 6.25 10013.99 2149.64 20250.29 00:08:44.764 PCIE (0000:00:12.0) NSID 2 from core 2: 1600.98 6.25 10029.87 2015.76 20504.48 00:08:44.764 PCIE (0000:00:12.0) NSID 3 from core 2: 1600.98 6.25 10031.25 2040.54 21081.18 00:08:44.764 ======================================================== 00:08:44.764 Total : 9605.90 37.52 10014.43 2015.76 21081.18 00:08:44.764 00:08:44.764 Initializing NVMe Controllers 00:08:44.764 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:44.764 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:44.764 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:44.764 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:44.764 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:08:44.764 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:08:44.764 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:08:44.764 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:08:44.764 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:08:44.764 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:08:44.764 Initialization complete. Launching workers. 00:08:44.764 ======================================================== 00:08:44.764 Latency(us) 00:08:44.764 Device Information : IOPS MiB/s Average min max 00:08:44.764 PCIE (0000:00:10.0) NSID 1 from core 1: 3316.79 12.96 4822.13 1355.47 11209.96 00:08:44.764 PCIE (0000:00:11.0) NSID 1 from core 1: 3316.79 12.96 4824.25 1376.16 12342.70 00:08:44.764 PCIE (0000:00:13.0) NSID 1 from core 1: 3316.79 12.96 4824.97 1317.95 11242.30 00:08:44.764 PCIE (0000:00:12.0) NSID 1 from core 1: 3316.79 12.96 4824.98 1356.88 11479.86 00:08:44.764 PCIE (0000:00:12.0) NSID 2 from core 1: 3316.79 12.96 4826.36 1297.99 12355.32 00:08:44.764 PCIE (0000:00:12.0) NSID 3 from core 1: 3322.12 12.98 4819.06 1304.78 10729.86 00:08:44.764 ======================================================== 00:08:44.764 Total : 19906.05 77.76 4823.63 1297.99 12355.32 00:08:44.764 00:08:44.764 19:10:29 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 65588 00:08:46.664 Initializing NVMe Controllers 00:08:46.664 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:46.664 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:46.664 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:46.664 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:46.664 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:46.664 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:46.664 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:46.664 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:46.664 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:46.664 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:46.664 Initialization complete. Launching workers. 
00:08:46.664 ======================================================== 00:08:46.664 Latency(us) 00:08:46.664 Device Information : IOPS MiB/s Average min max 00:08:46.664 PCIE (0000:00:10.0) NSID 1 from core 0: 6408.90 25.03 2495.17 777.07 11038.63 00:08:46.664 PCIE (0000:00:11.0) NSID 1 from core 0: 6408.90 25.03 2496.25 796.48 10290.83 00:08:46.664 PCIE (0000:00:13.0) NSID 1 from core 0: 6408.90 25.03 2496.22 799.07 8691.89 00:08:46.664 PCIE (0000:00:12.0) NSID 1 from core 0: 6408.90 25.03 2496.20 799.83 9405.41 00:08:46.664 PCIE (0000:00:12.0) NSID 2 from core 0: 6408.90 25.03 2496.17 801.04 10504.19 00:08:46.664 PCIE (0000:00:12.0) NSID 3 from core 0: 6412.10 25.05 2494.91 796.42 10232.82 00:08:46.664 ======================================================== 00:08:46.664 Total : 38456.59 150.22 2495.82 777.07 11038.63 00:08:46.664 00:08:46.664 19:10:30 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 65589 00:08:46.664 19:10:30 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=65654 00:08:46.664 19:10:30 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:08:46.664 19:10:30 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=65655 00:08:46.664 19:10:30 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:08:46.664 19:10:30 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:08:49.946 Initializing NVMe Controllers 00:08:49.946 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:49.946 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:49.946 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:49.946 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:49.946 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:49.946 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:49.946 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:49.946 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:49.946 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:49.946 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:49.946 Initialization complete. Launching workers. 
00:08:49.946 ======================================================== 00:08:49.946 Latency(us) 00:08:49.946 Device Information : IOPS MiB/s Average min max 00:08:49.946 PCIE (0000:00:10.0) NSID 1 from core 0: 3811.21 14.89 4196.45 1326.63 8826.88 00:08:49.946 PCIE (0000:00:11.0) NSID 1 from core 0: 3811.21 14.89 4197.59 1370.75 8967.39 00:08:49.946 PCIE (0000:00:13.0) NSID 1 from core 0: 3811.21 14.89 4197.69 1221.81 8989.80 00:08:49.946 PCIE (0000:00:12.0) NSID 1 from core 0: 3811.21 14.89 4197.61 1275.20 9294.25 00:08:49.946 PCIE (0000:00:12.0) NSID 2 from core 0: 3811.21 14.89 4197.51 1332.49 9736.09 00:08:49.946 PCIE (0000:00:12.0) NSID 3 from core 0: 3811.21 14.89 4197.46 1143.90 10665.61 00:08:49.946 ======================================================== 00:08:49.946 Total : 22867.26 89.33 4197.39 1143.90 10665.61 00:08:49.946 00:08:49.946 Initializing NVMe Controllers 00:08:49.946 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:49.946 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:49.946 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:49.946 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:49.946 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:08:49.947 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:08:49.947 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:08:49.947 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:08:49.947 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:08:49.947 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:08:49.947 Initialization complete. Launching workers. 00:08:49.947 ======================================================== 00:08:49.947 Latency(us) 00:08:49.947 Device Information : IOPS MiB/s Average min max 00:08:49.947 PCIE (0000:00:10.0) NSID 1 from core 1: 3577.90 13.98 4470.01 717.47 12511.71 00:08:49.947 PCIE (0000:00:11.0) NSID 1 from core 1: 3577.90 13.98 4471.47 745.31 12414.65 00:08:49.947 PCIE (0000:00:13.0) NSID 1 from core 1: 3577.90 13.98 4471.57 735.48 10828.51 00:08:49.947 PCIE (0000:00:12.0) NSID 1 from core 1: 3577.90 13.98 4471.49 740.35 10686.99 00:08:49.947 PCIE (0000:00:12.0) NSID 2 from core 1: 3577.90 13.98 4471.64 744.90 11660.46 00:08:49.947 PCIE (0000:00:12.0) NSID 3 from core 1: 3577.90 13.98 4471.62 740.40 11536.84 00:08:49.947 ======================================================== 00:08:49.947 Total : 21467.39 83.86 4471.30 717.47 12511.71 00:08:49.947 00:08:51.848 Initializing NVMe Controllers 00:08:51.849 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:51.849 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:51.849 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:51.849 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:51.849 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:08:51.849 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:08:51.849 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:08:51.849 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:08:51.849 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:08:51.849 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:08:51.849 Initialization complete. Launching workers. 
00:08:51.849 ======================================================== 00:08:51.849 Latency(us) 00:08:51.849 Device Information : IOPS MiB/s Average min max 00:08:51.849 PCIE (0000:00:10.0) NSID 1 from core 2: 2326.18 9.09 6876.98 980.80 20828.55 00:08:51.849 PCIE (0000:00:11.0) NSID 1 from core 2: 2326.18 9.09 6877.39 987.57 21781.36 00:08:51.849 PCIE (0000:00:13.0) NSID 1 from core 2: 2329.38 9.10 6867.85 985.14 22079.00 00:08:51.849 PCIE (0000:00:12.0) NSID 1 from core 2: 2329.38 9.10 6867.74 1000.71 23151.07 00:08:51.849 PCIE (0000:00:12.0) NSID 2 from core 2: 2329.38 9.10 6867.97 988.71 23340.54 00:08:51.849 PCIE (0000:00:12.0) NSID 3 from core 2: 2329.38 9.10 6867.52 979.29 23754.66 00:08:51.849 ======================================================== 00:08:51.849 Total : 13969.88 54.57 6870.91 979.29 23754.66 00:08:51.849 00:08:51.849 19:10:36 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 65654 00:08:51.849 19:10:36 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 65655 00:08:51.849 00:08:51.849 real 0m10.542s 00:08:51.849 user 0m18.359s 00:08:51.849 sys 0m0.701s 00:08:51.849 19:10:36 nvme.nvme_multi_secondary -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:51.849 ************************************ 00:08:51.849 END TEST nvme_multi_secondary 00:08:51.849 ************************************ 00:08:51.849 19:10:36 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:08:52.109 19:10:36 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:08:52.109 19:10:36 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:08:52.109 19:10:36 nvme -- common/autotest_common.sh@1093 -- # [[ -e /proc/64620 ]] 00:08:52.109 19:10:36 nvme -- common/autotest_common.sh@1094 -- # kill 64620 00:08:52.109 19:10:36 nvme -- common/autotest_common.sh@1095 -- # wait 64620 00:08:52.109 [2024-12-16 19:10:36.245437] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65537) is not found. Dropping the request. 00:08:52.109 [2024-12-16 19:10:36.245547] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65537) is not found. Dropping the request. 00:08:52.109 [2024-12-16 19:10:36.245591] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65537) is not found. Dropping the request. 00:08:52.109 [2024-12-16 19:10:36.245619] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65537) is not found. Dropping the request. 00:08:52.109 [2024-12-16 19:10:36.249595] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65537) is not found. Dropping the request. 00:08:52.109 [2024-12-16 19:10:36.249681] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65537) is not found. Dropping the request. 00:08:52.109 [2024-12-16 19:10:36.249707] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65537) is not found. Dropping the request. 00:08:52.109 [2024-12-16 19:10:36.249734] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65537) is not found. Dropping the request. 00:08:52.109 [2024-12-16 19:10:36.252316] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65537) is not found. Dropping the request. 
00:08:52.109 [2024-12-16 19:10:36.252349] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65537) is not found. Dropping the request. 00:08:52.109 [2024-12-16 19:10:36.252359] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65537) is not found. Dropping the request. 00:08:52.109 [2024-12-16 19:10:36.252369] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65537) is not found. Dropping the request. 00:08:52.109 [2024-12-16 19:10:36.253849] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65537) is not found. Dropping the request. 00:08:52.109 [2024-12-16 19:10:36.253883] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65537) is not found. Dropping the request. 00:08:52.109 [2024-12-16 19:10:36.253892] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65537) is not found. Dropping the request. 00:08:52.109 [2024-12-16 19:10:36.253902] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65537) is not found. Dropping the request. 00:08:52.109 [2024-12-16 19:10:36.368707] nvme_cuse.c:1023:cuse_thread: *NOTICE*: Cuse thread exited. 00:08:52.109 19:10:36 nvme -- common/autotest_common.sh@1097 -- # rm -f /var/run/spdk_stub0 00:08:52.109 19:10:36 nvme -- common/autotest_common.sh@1101 -- # echo 2 00:08:52.109 19:10:36 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:08:52.109 19:10:36 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:52.109 19:10:36 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:52.109 19:10:36 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:52.109 ************************************ 00:08:52.109 START TEST bdev_nvme_reset_stuck_adm_cmd 00:08:52.109 ************************************ 00:08:52.109 19:10:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:08:52.371 * Looking for test storage... 
00:08:52.371 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:08:52.371 19:10:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:52.371 19:10:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1711 -- # lcov --version 00:08:52.371 19:10:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:52.371 19:10:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:52.371 19:10:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:52.371 19:10:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:52.371 19:10:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:52.371 19:10:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:08:52.371 19:10:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:08:52.371 19:10:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:08:52.371 19:10:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:08:52.371 19:10:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:08:52.371 19:10:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:08:52.371 19:10:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:08:52.371 19:10:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:52.371 19:10:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:08:52.371 19:10:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:08:52.371 19:10:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:52.371 19:10:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:52.371 19:10:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:08:52.371 19:10:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:08:52.371 19:10:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:52.371 19:10:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:08:52.371 19:10:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:08:52.371 19:10:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:08:52.371 19:10:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:08:52.371 19:10:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:52.371 19:10:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:08:52.371 19:10:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:08:52.371 19:10:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:52.371 19:10:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:52.371 19:10:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:08:52.371 19:10:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:52.371 19:10:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:52.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:52.371 --rc genhtml_branch_coverage=1 00:08:52.371 --rc genhtml_function_coverage=1 00:08:52.371 --rc genhtml_legend=1 00:08:52.371 --rc geninfo_all_blocks=1 00:08:52.371 --rc geninfo_unexecuted_blocks=1 00:08:52.371 00:08:52.371 ' 00:08:52.371 19:10:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:52.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:52.372 --rc genhtml_branch_coverage=1 00:08:52.372 --rc genhtml_function_coverage=1 00:08:52.372 --rc genhtml_legend=1 00:08:52.372 --rc geninfo_all_blocks=1 00:08:52.372 --rc geninfo_unexecuted_blocks=1 00:08:52.372 00:08:52.372 ' 00:08:52.372 19:10:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:52.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:52.372 --rc genhtml_branch_coverage=1 00:08:52.372 --rc genhtml_function_coverage=1 00:08:52.372 --rc genhtml_legend=1 00:08:52.372 --rc geninfo_all_blocks=1 00:08:52.372 --rc geninfo_unexecuted_blocks=1 00:08:52.372 00:08:52.372 ' 00:08:52.372 19:10:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:52.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:52.372 --rc genhtml_branch_coverage=1 00:08:52.372 --rc genhtml_function_coverage=1 00:08:52.372 --rc genhtml_legend=1 00:08:52.372 --rc geninfo_all_blocks=1 00:08:52.372 --rc geninfo_unexecuted_blocks=1 00:08:52.372 00:08:52.372 ' 00:08:52.372 19:10:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:08:52.372 19:10:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:08:52.372 19:10:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:08:52.372 
19:10:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:08:52.372 19:10:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:08:52.372 19:10:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:08:52.372 19:10:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # bdfs=() 00:08:52.372 19:10:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # local bdfs 00:08:52.372 19:10:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:08:52.372 19:10:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:08:52.372 19:10:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:52.372 19:10:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # local bdfs 00:08:52.372 19:10:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:52.372 19:10:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:52.372 19:10:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:52.372 19:10:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:08:52.372 19:10:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:52.372 19:10:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:08:52.372 19:10:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:08:52.372 19:10:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:08:52.372 19:10:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=65821 00:08:52.372 19:10:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:52.372 19:10:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 65821 00:08:52.372 19:10:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:08:52.372 19:10:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # '[' -z 65821 ']' 00:08:52.372 19:10:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:52.372 19:10:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:52.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:52.372 19:10:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:52.372 19:10:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:52.372 19:10:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:52.372 [2024-12-16 19:10:36.695572] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:08:52.372 [2024-12-16 19:10:36.696116] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65821 ] 00:08:52.634 [2024-12-16 19:10:36.882506] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:52.893 [2024-12-16 19:10:37.040009] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:08:52.893 [2024-12-16 19:10:37.040390] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:08:52.893 [2024-12-16 19:10:37.040700] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:08:52.893 [2024-12-16 19:10:37.040781] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:53.459 19:10:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:53.459 19:10:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@868 -- # return 0 00:08:53.459 19:10:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:08:53.459 19:10:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.460 19:10:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:53.460 nvme0n1 00:08:53.460 19:10:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.460 19:10:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:08:53.460 19:10:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_0V8Rw.txt 00:08:53.460 19:10:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:08:53.460 19:10:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.460 19:10:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:53.460 true 00:08:53.460 19:10:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.460 19:10:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:08:53.460 19:10:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1734376237 00:08:53.460 19:10:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=65844 00:08:53.460 19:10:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:53.460 19:10:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:08:53.460 19:10:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c 
CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:08:55.988 19:10:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:08:55.988 19:10:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.988 19:10:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:55.988 [2024-12-16 19:10:39.796693] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:08:55.988 [2024-12-16 19:10:39.796973] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:08:55.988 [2024-12-16 19:10:39.796999] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:08:55.988 [2024-12-16 19:10:39.797013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:55.988 [2024-12-16 19:10:39.800130] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:08:55.988 19:10:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.988 19:10:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 65844 00:08:55.988 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 65844 00:08:55.988 19:10:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 65844 00:08:55.988 19:10:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:08:55.988 19:10:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:08:55.988 19:10:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:08:55.988 19:10:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.988 19:10:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:55.988 19:10:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.988 19:10:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:08:55.988 19:10:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_0V8Rw.txt 00:08:55.988 19:10:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:08:55.988 19:10:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:08:55.988 19:10:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:08:55.988 19:10:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:08:55.988 19:10:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:08:55.988 19:10:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:08:55.988 19:10:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:08:55.988 19:10:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:08:55.988 19:10:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:08:55.988 19:10:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:08:55.988 19:10:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:08:55.988 19:10:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:08:55.988 19:10:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:08:55.988 19:10:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:08:55.988 19:10:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:08:55.988 19:10:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:08:55.988 19:10:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:08:55.988 19:10:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:08:55.988 19:10:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:08:55.988 19:10:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_0V8Rw.txt 00:08:55.988 19:10:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 65821 00:08:55.988 19:10:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # '[' -z 65821 ']' 00:08:55.988 19:10:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # kill -0 65821 00:08:55.988 19:10:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # uname 00:08:55.988 19:10:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:55.988 19:10:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65821 00:08:55.988 killing process with pid 65821 00:08:55.988 19:10:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:55.988 19:10:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:55.988 19:10:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65821' 00:08:55.988 19:10:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@973 -- # kill 65821 00:08:55.988 19:10:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@978 -- # wait 65821 00:08:56.921 19:10:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:08:56.921 19:10:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:08:56.921 00:08:56.921 real 0m4.818s 00:08:56.921 user 0m16.795s 00:08:56.921 sys 0m0.586s 00:08:56.921 ************************************ 00:08:56.921 19:10:41 
nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:56.921 19:10:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:56.921 END TEST bdev_nvme_reset_stuck_adm_cmd 00:08:56.921 ************************************ 00:08:56.921 19:10:41 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:08:56.921 19:10:41 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:08:56.921 19:10:41 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:56.921 19:10:41 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:56.921 19:10:41 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:57.179 ************************************ 00:08:57.179 START TEST nvme_fio 00:08:57.179 ************************************ 00:08:57.179 19:10:41 nvme.nvme_fio -- common/autotest_common.sh@1129 -- # nvme_fio_test 00:08:57.179 19:10:41 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:08:57.179 19:10:41 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:08:57.179 19:10:41 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:08:57.179 19:10:41 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:57.179 19:10:41 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # local bdfs 00:08:57.179 19:10:41 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:57.179 19:10:41 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:57.179 19:10:41 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:57.179 19:10:41 nvme.nvme_fio -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:08:57.179 19:10:41 nvme.nvme_fio -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:57.179 19:10:41 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:08:57.179 19:10:41 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:08:57.179 19:10:41 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:08:57.179 19:10:41 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:08:57.180 19:10:41 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:08:57.441 19:10:41 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:08:57.441 19:10:41 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:08:57.441 19:10:41 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:08:57.441 19:10:41 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:08:57.441 19:10:41 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:08:57.441 19:10:41 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:08:57.441 19:10:41 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:08:57.441 19:10:41 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:08:57.441 19:10:41 
nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:57.441 19:10:41 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:08:57.441 19:10:41 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:08:57.441 19:10:41 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:08:57.441 19:10:41 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:08:57.441 19:10:41 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:57.441 19:10:41 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:08:57.441 19:10:41 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:08:57.441 19:10:41 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:08:57.441 19:10:41 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:08:57.441 19:10:41 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:08:57.441 19:10:41 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:08:57.702 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:08:57.702 fio-3.35 00:08:57.702 Starting 1 thread 00:09:01.894 00:09:01.894 test: (groupid=0, jobs=1): err= 0: pid=65979: Mon Dec 16 19:10:45 2024 00:09:01.894 read: IOPS=18.8k, BW=73.4MiB/s (76.9MB/s)(147MiB/2001msec) 00:09:01.894 slat (usec): min=3, max=132, avg= 6.25, stdev= 2.57 00:09:01.894 clat (usec): min=280, max=12463, avg=3376.13, stdev=1071.31 00:09:01.894 lat (usec): min=285, max=12497, avg=3382.39, stdev=1072.66 00:09:01.894 clat percentiles (usec): 00:09:01.894 | 1.00th=[ 2212], 5.00th=[ 2442], 10.00th=[ 2540], 20.00th=[ 2638], 00:09:01.894 | 30.00th=[ 2737], 40.00th=[ 2868], 50.00th=[ 2999], 60.00th=[ 3228], 00:09:01.894 | 70.00th=[ 3523], 80.00th=[ 3884], 90.00th=[ 4752], 95.00th=[ 5866], 00:09:01.894 | 99.00th=[ 7177], 99.50th=[ 7570], 99.90th=[ 8848], 99.95th=[10290], 00:09:01.894 | 99.99th=[12387] 00:09:01.894 bw ( KiB/s): min=59384, max=82624, per=96.78%, avg=72730.67, stdev=11998.69, samples=3 00:09:01.894 iops : min=14846, max=20656, avg=18182.67, stdev=2999.67, samples=3 00:09:01.894 write: IOPS=18.8k, BW=73.4MiB/s (77.0MB/s)(147MiB/2001msec); 0 zone resets 00:09:01.894 slat (nsec): min=4106, max=81472, avg=6593.28, stdev=2611.62 00:09:01.894 clat (usec): min=289, max=12393, avg=3411.74, stdev=1085.10 00:09:01.894 lat (usec): min=295, max=12407, avg=3418.33, stdev=1086.47 00:09:01.894 clat percentiles (usec): 00:09:01.894 | 1.00th=[ 2245], 5.00th=[ 2474], 10.00th=[ 2540], 20.00th=[ 2671], 00:09:01.894 | 30.00th=[ 2769], 40.00th=[ 2900], 50.00th=[ 3032], 60.00th=[ 3261], 00:09:01.894 | 70.00th=[ 3556], 80.00th=[ 3916], 90.00th=[ 4817], 95.00th=[ 5932], 00:09:01.894 | 99.00th=[ 7242], 99.50th=[ 7701], 99.90th=[ 8979], 99.95th=[10421], 00:09:01.894 | 99.99th=[10945] 00:09:01.894 bw ( KiB/s): min=59704, max=82552, per=96.63%, avg=72640.00, stdev=11720.33, samples=3 00:09:01.894 iops : min=14926, max=20638, avg=18160.00, stdev=2930.08, samples=3 00:09:01.894 lat (usec) : 500=0.02%, 750=0.01%, 1000=0.01% 00:09:01.894 lat (msec) : 2=0.22%, 4=82.01%, 10=17.66%, 20=0.07% 00:09:01.894 cpu : usr=99.10%, sys=0.10%, 
ctx=4, majf=0, minf=607 00:09:01.894 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:01.894 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:01.894 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:01.894 issued rwts: total=37592,37607,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:01.894 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:01.894 00:09:01.894 Run status group 0 (all jobs): 00:09:01.894 READ: bw=73.4MiB/s (76.9MB/s), 73.4MiB/s-73.4MiB/s (76.9MB/s-76.9MB/s), io=147MiB (154MB), run=2001-2001msec 00:09:01.894 WRITE: bw=73.4MiB/s (77.0MB/s), 73.4MiB/s-73.4MiB/s (77.0MB/s-77.0MB/s), io=147MiB (154MB), run=2001-2001msec 00:09:01.894 ----------------------------------------------------- 00:09:01.894 Suppressions used: 00:09:01.894 count bytes template 00:09:01.894 1 32 /usr/src/fio/parse.c 00:09:01.894 1 8 libtcmalloc_minimal.so 00:09:01.894 ----------------------------------------------------- 00:09:01.894 00:09:01.894 19:10:45 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:01.894 19:10:45 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:01.894 19:10:45 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:09:01.894 19:10:45 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:01.894 19:10:46 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:09:01.894 19:10:46 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:02.155 19:10:46 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:02.155 19:10:46 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:02.155 19:10:46 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:02.155 19:10:46 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:09:02.155 19:10:46 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:02.155 19:10:46 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:09:02.155 19:10:46 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:02.155 19:10:46 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:09:02.155 19:10:46 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:09:02.155 19:10:46 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:09:02.155 19:10:46 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:02.155 19:10:46 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:09:02.156 19:10:46 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:09:02.156 19:10:46 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:02.156 19:10:46 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:02.156 19:10:46 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:09:02.156 19:10:46 nvme.nvme_fio -- 
common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:02.156 19:10:46 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:02.415 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:02.415 fio-3.35 00:09:02.415 Starting 1 thread 00:09:07.706 00:09:07.706 test: (groupid=0, jobs=1): err= 0: pid=66040: Mon Dec 16 19:10:51 2024 00:09:07.706 read: IOPS=19.7k, BW=77.0MiB/s (80.7MB/s)(154MiB/2001msec) 00:09:07.706 slat (nsec): min=4207, max=76650, avg=5431.64, stdev=2726.85 00:09:07.706 clat (usec): min=616, max=8613, avg=3225.63, stdev=1069.75 00:09:07.706 lat (usec): min=628, max=8626, avg=3231.06, stdev=1071.01 00:09:07.706 clat percentiles (usec): 00:09:07.706 | 1.00th=[ 1926], 5.00th=[ 2278], 10.00th=[ 2376], 20.00th=[ 2507], 00:09:07.706 | 30.00th=[ 2606], 40.00th=[ 2671], 50.00th=[ 2802], 60.00th=[ 2966], 00:09:07.706 | 70.00th=[ 3228], 80.00th=[ 3982], 90.00th=[ 4948], 95.00th=[ 5538], 00:09:07.706 | 99.00th=[ 6652], 99.50th=[ 6980], 99.90th=[ 7832], 99.95th=[ 8029], 00:09:07.706 | 99.99th=[ 8356] 00:09:07.706 bw ( KiB/s): min=75424, max=78432, per=98.05%, avg=77285.33, stdev=1626.37, samples=3 00:09:07.706 iops : min=18856, max=19608, avg=19321.33, stdev=406.59, samples=3 00:09:07.706 write: IOPS=19.7k, BW=76.9MiB/s (80.6MB/s)(154MiB/2001msec); 0 zone resets 00:09:07.706 slat (nsec): min=4265, max=72432, avg=5602.22, stdev=2775.22 00:09:07.706 clat (usec): min=745, max=8563, avg=3256.11, stdev=1084.86 00:09:07.706 lat (usec): min=757, max=8576, avg=3261.71, stdev=1086.14 00:09:07.706 clat percentiles (usec): 00:09:07.706 | 1.00th=[ 1942], 5.00th=[ 2278], 10.00th=[ 2409], 20.00th=[ 2507], 00:09:07.706 | 30.00th=[ 2606], 40.00th=[ 2704], 50.00th=[ 2802], 60.00th=[ 2999], 00:09:07.706 | 70.00th=[ 3261], 80.00th=[ 4015], 90.00th=[ 5014], 95.00th=[ 5604], 00:09:07.706 | 99.00th=[ 6783], 99.50th=[ 7046], 99.90th=[ 7963], 99.95th=[ 8094], 00:09:07.706 | 99.99th=[ 8356] 00:09:07.706 bw ( KiB/s): min=75776, max=78648, per=98.35%, avg=77408.00, stdev=1475.58, samples=3 00:09:07.706 iops : min=18944, max=19662, avg=19352.00, stdev=368.90, samples=3 00:09:07.706 lat (usec) : 750=0.01%, 1000=0.01% 00:09:07.706 lat (msec) : 2=1.26%, 4=78.78%, 10=19.95% 00:09:07.706 cpu : usr=98.90%, sys=0.15%, ctx=13, majf=0, minf=608 00:09:07.706 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:07.706 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:07.706 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:07.706 issued rwts: total=39432,39372,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:07.706 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:07.706 00:09:07.706 Run status group 0 (all jobs): 00:09:07.706 READ: bw=77.0MiB/s (80.7MB/s), 77.0MiB/s-77.0MiB/s (80.7MB/s-80.7MB/s), io=154MiB (162MB), run=2001-2001msec 00:09:07.706 WRITE: bw=76.9MiB/s (80.6MB/s), 76.9MiB/s-76.9MiB/s (80.6MB/s-80.6MB/s), io=154MiB (161MB), run=2001-2001msec 00:09:07.706 ----------------------------------------------------- 00:09:07.706 Suppressions used: 00:09:07.706 count bytes template 00:09:07.706 1 32 /usr/src/fio/parse.c 00:09:07.706 1 8 libtcmalloc_minimal.so 00:09:07.706 ----------------------------------------------------- 00:09:07.706 00:09:07.706 19:10:51 
nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:07.706 19:10:51 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:07.706 19:10:51 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:09:07.706 19:10:51 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:07.967 19:10:52 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:09:07.967 19:10:52 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:08.226 19:10:52 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:08.226 19:10:52 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:08.226 19:10:52 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:08.226 19:10:52 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:09:08.226 19:10:52 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:08.226 19:10:52 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:09:08.226 19:10:52 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:08.226 19:10:52 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:09:08.226 19:10:52 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:09:08.226 19:10:52 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:09:08.226 19:10:52 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:08.226 19:10:52 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:09:08.226 19:10:52 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:09:08.226 19:10:52 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:08.226 19:10:52 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:08.226 19:10:52 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:09:08.226 19:10:52 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:08.226 19:10:52 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:08.484 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:08.484 fio-3.35 00:09:08.484 Starting 1 thread 00:09:16.656 00:09:16.656 test: (groupid=0, jobs=1): err= 0: pid=66097: Mon Dec 16 19:10:59 2024 00:09:16.656 read: IOPS=24.2k, BW=94.3MiB/s (98.9MB/s)(189MiB/2001msec) 00:09:16.656 slat (usec): min=4, max=226, avg= 4.97, stdev= 2.53 00:09:16.656 clat (usec): min=225, max=7318, avg=2643.19, stdev=766.44 00:09:16.656 lat (usec): min=230, max=7331, avg=2648.16, stdev=767.81 00:09:16.656 clat percentiles (usec): 00:09:16.656 | 1.00th=[ 1713], 5.00th=[ 2114], 10.00th=[ 2311], 20.00th=[ 2376], 00:09:16.656 | 30.00th=[ 2409], 40.00th=[ 2442], 
50.00th=[ 2442], 60.00th=[ 2474], 00:09:16.656 | 70.00th=[ 2507], 80.00th=[ 2573], 90.00th=[ 2999], 95.00th=[ 4621], 00:09:16.656 | 99.00th=[ 6194], 99.50th=[ 6390], 99.90th=[ 7046], 99.95th=[ 7177], 00:09:16.656 | 99.99th=[ 7177] 00:09:16.656 bw ( KiB/s): min=93392, max=99616, per=99.73%, avg=96344.00, stdev=3124.31, samples=3 00:09:16.656 iops : min=23348, max=24904, avg=24086.00, stdev=781.08, samples=3 00:09:16.656 write: IOPS=24.0k, BW=93.7MiB/s (98.3MB/s)(188MiB/2001msec); 0 zone resets 00:09:16.656 slat (nsec): min=4288, max=56500, avg=5280.21, stdev=2207.15 00:09:16.656 clat (usec): min=204, max=7264, avg=2652.62, stdev=776.89 00:09:16.656 lat (usec): min=208, max=7277, avg=2657.90, stdev=778.27 00:09:16.656 clat percentiles (usec): 00:09:16.656 | 1.00th=[ 1696], 5.00th=[ 2114], 10.00th=[ 2311], 20.00th=[ 2376], 00:09:16.656 | 30.00th=[ 2409], 40.00th=[ 2442], 50.00th=[ 2442], 60.00th=[ 2474], 00:09:16.656 | 70.00th=[ 2507], 80.00th=[ 2573], 90.00th=[ 3032], 95.00th=[ 4686], 00:09:16.656 | 99.00th=[ 6194], 99.50th=[ 6390], 99.90th=[ 6980], 99.95th=[ 7111], 00:09:16.656 | 99.99th=[ 7177] 00:09:16.656 bw ( KiB/s): min=94520, max=99120, per=100.00%, avg=96474.67, stdev=2376.50, samples=3 00:09:16.656 iops : min=23630, max=24780, avg=24118.67, stdev=594.13, samples=3 00:09:16.656 lat (usec) : 250=0.01%, 500=0.01%, 750=0.02%, 1000=0.02% 00:09:16.656 lat (msec) : 2=3.27%, 4=89.92%, 10=6.75% 00:09:16.656 cpu : usr=99.05%, sys=0.20%, ctx=2, majf=0, minf=607 00:09:16.656 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:16.656 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:16.656 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:16.656 issued rwts: total=48327,48004,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:16.656 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:16.656 00:09:16.656 Run status group 0 (all jobs): 00:09:16.656 READ: bw=94.3MiB/s (98.9MB/s), 94.3MiB/s-94.3MiB/s (98.9MB/s-98.9MB/s), io=189MiB (198MB), run=2001-2001msec 00:09:16.656 WRITE: bw=93.7MiB/s (98.3MB/s), 93.7MiB/s-93.7MiB/s (98.3MB/s-98.3MB/s), io=188MiB (197MB), run=2001-2001msec 00:09:16.656 ----------------------------------------------------- 00:09:16.656 Suppressions used: 00:09:16.656 count bytes template 00:09:16.656 1 32 /usr/src/fio/parse.c 00:09:16.656 1 8 libtcmalloc_minimal.so 00:09:16.656 ----------------------------------------------------- 00:09:16.656 00:09:16.656 19:10:59 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:16.656 19:10:59 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:16.656 19:10:59 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:16.656 19:10:59 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:16.656 19:10:59 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:16.656 19:10:59 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:16.656 19:11:00 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:16.656 19:11:00 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:16.656 19:11:00 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 
/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:16.656 19:11:00 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:09:16.656 19:11:00 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:16.656 19:11:00 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:09:16.656 19:11:00 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:16.656 19:11:00 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:09:16.656 19:11:00 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:09:16.656 19:11:00 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:09:16.656 19:11:00 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:16.656 19:11:00 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:09:16.656 19:11:00 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:09:16.656 19:11:00 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:16.656 19:11:00 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:16.656 19:11:00 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:09:16.656 19:11:00 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:16.656 19:11:00 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:16.656 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:16.656 fio-3.35 00:09:16.656 Starting 1 thread 00:09:24.796 00:09:24.796 test: (groupid=0, jobs=1): err= 0: pid=66162: Mon Dec 16 19:11:07 2024 00:09:24.796 read: IOPS=21.0k, BW=82.2MiB/s (86.2MB/s)(164MiB/2001msec) 00:09:24.796 slat (nsec): min=3867, max=60492, avg=5668.05, stdev=2308.16 00:09:24.796 clat (usec): min=982, max=9758, avg=3035.66, stdev=914.36 00:09:24.796 lat (usec): min=986, max=9816, avg=3041.33, stdev=915.52 00:09:24.796 clat percentiles (usec): 00:09:24.796 | 1.00th=[ 2089], 5.00th=[ 2409], 10.00th=[ 2474], 20.00th=[ 2540], 00:09:24.796 | 30.00th=[ 2606], 40.00th=[ 2638], 50.00th=[ 2671], 60.00th=[ 2769], 00:09:24.796 | 70.00th=[ 2900], 80.00th=[ 3228], 90.00th=[ 4228], 95.00th=[ 5276], 00:09:24.796 | 99.00th=[ 6587], 99.50th=[ 6915], 99.90th=[ 7439], 99.95th=[ 7701], 00:09:24.796 | 99.99th=[ 9634] 00:09:24.796 bw ( KiB/s): min=83632, max=84736, per=100.00%, avg=84242.67, stdev=561.27, samples=3 00:09:24.796 iops : min=20908, max=21184, avg=21060.67, stdev=140.32, samples=3 00:09:24.796 write: IOPS=20.9k, BW=81.7MiB/s (85.7MB/s)(163MiB/2001msec); 0 zone resets 00:09:24.796 slat (nsec): min=4044, max=81953, avg=6085.61, stdev=2392.15 00:09:24.796 clat (usec): min=1023, max=9633, avg=3041.88, stdev=917.47 00:09:24.796 lat (usec): min=1027, max=9657, avg=3047.97, stdev=918.60 00:09:24.796 clat percentiles (usec): 00:09:24.796 | 1.00th=[ 2114], 5.00th=[ 2409], 10.00th=[ 2474], 20.00th=[ 2573], 00:09:24.796 | 30.00th=[ 2606], 40.00th=[ 2638], 50.00th=[ 2704], 60.00th=[ 2769], 00:09:24.796 | 70.00th=[ 2900], 80.00th=[ 3261], 90.00th=[ 4228], 95.00th=[ 5276], 
00:09:24.796 | 99.00th=[ 6652], 99.50th=[ 6980], 99.90th=[ 7504], 99.95th=[ 7963], 00:09:24.796 | 99.99th=[ 9372] 00:09:24.796 bw ( KiB/s): min=83968, max=84592, per=100.00%, avg=84325.33, stdev=321.73, samples=3 00:09:24.796 iops : min=20992, max=21148, avg=21081.33, stdev=80.43, samples=3 00:09:24.796 lat (usec) : 1000=0.01% 00:09:24.796 lat (msec) : 2=0.69%, 4=87.68%, 10=11.63% 00:09:24.796 cpu : usr=99.30%, sys=0.00%, ctx=7, majf=0, minf=605 00:09:24.796 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:24.796 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:24.796 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:24.796 issued rwts: total=42089,41853,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:24.796 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:24.796 00:09:24.796 Run status group 0 (all jobs): 00:09:24.796 READ: bw=82.2MiB/s (86.2MB/s), 82.2MiB/s-82.2MiB/s (86.2MB/s-86.2MB/s), io=164MiB (172MB), run=2001-2001msec 00:09:24.796 WRITE: bw=81.7MiB/s (85.7MB/s), 81.7MiB/s-81.7MiB/s (85.7MB/s-85.7MB/s), io=163MiB (171MB), run=2001-2001msec 00:09:24.796 ----------------------------------------------------- 00:09:24.796 Suppressions used: 00:09:24.796 count bytes template 00:09:24.796 1 32 /usr/src/fio/parse.c 00:09:24.796 1 8 libtcmalloc_minimal.so 00:09:24.796 ----------------------------------------------------- 00:09:24.796 00:09:24.796 19:11:08 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:24.796 19:11:08 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:09:24.796 00:09:24.796 real 0m26.889s 00:09:24.796 user 0m16.102s 00:09:24.796 sys 0m19.510s 00:09:24.796 19:11:08 nvme.nvme_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:24.796 ************************************ 00:09:24.796 END TEST nvme_fio 00:09:24.796 19:11:08 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:09:24.796 ************************************ 00:09:24.796 00:09:24.796 real 1m36.040s 00:09:24.796 user 3m36.504s 00:09:24.796 sys 0m30.196s 00:09:24.796 19:11:08 nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:24.796 ************************************ 00:09:24.796 END TEST nvme 00:09:24.796 ************************************ 00:09:24.796 19:11:08 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:24.796 19:11:08 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:09:24.796 19:11:08 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:09:24.796 19:11:08 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:24.796 19:11:08 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:24.796 19:11:08 -- common/autotest_common.sh@10 -- # set +x 00:09:24.796 ************************************ 00:09:24.796 START TEST nvme_scc 00:09:24.796 ************************************ 00:09:24.796 19:11:08 nvme_scc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:09:24.796 * Looking for test storage... 
00:09:24.796 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:24.796 19:11:08 nvme_scc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:24.796 19:11:08 nvme_scc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:24.796 19:11:08 nvme_scc -- common/autotest_common.sh@1711 -- # lcov --version 00:09:24.796 19:11:08 nvme_scc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:24.796 19:11:08 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:24.796 19:11:08 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:24.796 19:11:08 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:24.796 19:11:08 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:09:24.796 19:11:08 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:09:24.796 19:11:08 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:09:24.796 19:11:08 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:09:24.796 19:11:08 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:09:24.796 19:11:08 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:09:24.796 19:11:08 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:09:24.796 19:11:08 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:24.796 19:11:08 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:09:24.796 19:11:08 nvme_scc -- scripts/common.sh@345 -- # : 1 00:09:24.797 19:11:08 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:24.797 19:11:08 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:24.797 19:11:08 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:09:24.797 19:11:08 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:09:24.797 19:11:08 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:24.797 19:11:08 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:09:24.797 19:11:08 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:24.797 19:11:08 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:09:24.797 19:11:08 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:09:24.797 19:11:08 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:24.797 19:11:08 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:09:24.797 19:11:08 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:24.797 19:11:08 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:24.797 19:11:08 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:24.797 19:11:08 nvme_scc -- scripts/common.sh@368 -- # return 0 00:09:24.797 19:11:08 nvme_scc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:24.797 19:11:08 nvme_scc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:24.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.797 --rc genhtml_branch_coverage=1 00:09:24.797 --rc genhtml_function_coverage=1 00:09:24.797 --rc genhtml_legend=1 00:09:24.797 --rc geninfo_all_blocks=1 00:09:24.797 --rc geninfo_unexecuted_blocks=1 00:09:24.797 00:09:24.797 ' 00:09:24.797 19:11:08 nvme_scc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:24.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.797 --rc genhtml_branch_coverage=1 00:09:24.797 --rc genhtml_function_coverage=1 00:09:24.797 --rc genhtml_legend=1 00:09:24.797 --rc geninfo_all_blocks=1 00:09:24.797 --rc geninfo_unexecuted_blocks=1 00:09:24.797 00:09:24.797 ' 00:09:24.797 19:11:08 nvme_scc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:09:24.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.797 --rc genhtml_branch_coverage=1 00:09:24.797 --rc genhtml_function_coverage=1 00:09:24.797 --rc genhtml_legend=1 00:09:24.797 --rc geninfo_all_blocks=1 00:09:24.797 --rc geninfo_unexecuted_blocks=1 00:09:24.797 00:09:24.797 ' 00:09:24.797 19:11:08 nvme_scc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:24.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.797 --rc genhtml_branch_coverage=1 00:09:24.797 --rc genhtml_function_coverage=1 00:09:24.797 --rc genhtml_legend=1 00:09:24.797 --rc geninfo_all_blocks=1 00:09:24.797 --rc geninfo_unexecuted_blocks=1 00:09:24.797 00:09:24.797 ' 00:09:24.797 19:11:08 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:24.797 19:11:08 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:24.797 19:11:08 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:09:24.797 19:11:08 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:09:24.797 19:11:08 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:24.797 19:11:08 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:09:24.797 19:11:08 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:24.797 19:11:08 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:24.797 19:11:08 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:24.797 19:11:08 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.797 19:11:08 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.797 19:11:08 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.797 19:11:08 nvme_scc -- paths/export.sh@5 -- # export PATH 00:09:24.797 19:11:08 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
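Each re-source of paths/export.sh prepends the same golangci/protoc/go directories again, which is why the PATH echoed above carries four copies of the same prefix. That is harmless for command lookup (the first hit wins), but an idempotent prepend avoids the growth; a hedged sketch, where pathmunge is an invented name rather than anything the SPDK tree defines:

    # Only prepend a directory if it is not already on PATH (hypothetical helper).
    pathmunge() {
        case ":$PATH:" in
            *":$1:"*) ;;              # already present: leave PATH alone
            *) PATH="$1:$PATH" ;;
        esac
    }
    pathmunge /opt/golangci/1.54.2/bin
    pathmunge /opt/protoc/21.7/bin
    pathmunge /opt/go/1.21.1/bin
    export PATH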
00:09:24.797 19:11:08 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:09:24.797 19:11:08 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:09:24.797 19:11:08 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:09:24.797 19:11:08 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:09:24.797 19:11:08 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:09:24.797 19:11:08 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:09:24.797 19:11:08 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:09:24.797 19:11:08 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:09:24.797 19:11:08 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:09:24.797 19:11:08 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:24.797 19:11:08 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:09:24.797 19:11:08 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:09:24.797 19:11:08 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:09:24.797 19:11:08 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:24.797 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:24.797 Waiting for block devices as requested 00:09:24.797 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:24.797 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:24.797 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:25.056 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:30.359 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:30.359 19:11:14 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:09:30.359 19:11:14 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:09:30.359 19:11:14 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:30.359 19:11:14 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:09:30.359 19:11:14 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:09:30.359 19:11:14 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:09:30.359 19:11:14 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:30.359 19:11:14 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:30.359 19:11:14 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:30.359 19:11:14 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:30.359 19:11:14 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:09:30.359 19:11:14 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:09:30.359 19:11:14 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:09:30.359 19:11:14 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:30.359 19:11:14 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:09:30.359 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.359 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.359 19:11:14 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:09:30.359 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:30.359 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.359 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.359 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:30.359 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:09:30.359 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
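scan_nvme_ctrls is walking /sys/class/nvme/nvme* here, and for each controller nvme_get parses the output of 'nvme id-ctrl /dev/nvme0' line by line into the nvme0 associative array; that loop is what produces the long register dump that follows (vid, ssvid, sn, mn, and so on). A condensed sketch of the idea, assuming id-ctrl keeps nvme-cli's "name : value" layout; the real function also quotes values and goes through an eval so the array name can vary per controller:

    declare -A nvme0=()
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}           # 'vid   ' -> 'vid'
        [[ -n $reg && -n $val ]] && nvme0[$reg]=${val# }
    done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0)

    echo "${nvme0[vid]}"                   # -> 0x1b36 for this QEMU controller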
00:09:30.359 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.359 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.359 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:30.359 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:09:30.359 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:09:30.359 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.359 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.359 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:09:30.359 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:09:30.359 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:09:30.359 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.359 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.359 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:30.359 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:09:30.359 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:09:30.359 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.359 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.359 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:30.359 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:09:30.359 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:09:30.359 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.359 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.359 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:30.359 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:09:30.359 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:09:30.359 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.359 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.359 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:30.359 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:09:30.359 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:09:30.359 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.359 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.359 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.359 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:09:30.359 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:09:30.359 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.359 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.359 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:30.359 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:09:30.359 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:09:30.359 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.359 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.359 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.359 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:09:30.359 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:09:30.359 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.359 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:30.359 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:30.359 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:09:30.359 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:09:30.359 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.359 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.359 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.359 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:09:30.359 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:09:30.359 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.359 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.359 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.359 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:09:30.359 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:09:30.359 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.359 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.359 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:30.359 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:09:30.359 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:09:30.359 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.359 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.359 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:30.359 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:09:30.359 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:09:30.359 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.359 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.359 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.359 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:09:30.359 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:09:30.359 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.359 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.359 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:30.359 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:09:30.359 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:09:30.359 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.359 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.359 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:30.359 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:30.359 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:09:30.360 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.360 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.360 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.360 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:09:30.360 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:09:30.360 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.360 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.360 19:11:14 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.360 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:09:30.360 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:09:30.360 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.360 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.360 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.360 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:09:30.360 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:09:30.360 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.360 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.360 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.360 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:09:30.360 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:09:30.360 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.360 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.360 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.360 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:09:30.360 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:09:30.360 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.360 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.360 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.360 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:09:30.360 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:09:30.360 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.360 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.360 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:30.360 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:09:30.360 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:09:30.360 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.360 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.360 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:30.360 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:09:30.360 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:09:30.360 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.360 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.360 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:30.360 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:09:30.360 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:09:30.360 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.360 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.360 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:30.360 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:09:30.360 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:09:30.360 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.360 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.360 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:30.360 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:09:30.360 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 
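The mdts value captured above (7) bounds how much data a single command may move: per the NVMe spec, the limit is 2^MDTS units of the controller's minimum memory page size (CAP.MPSMIN), with MDTS=0 meaning no limit. A worked example, assuming the usual 4 KiB minimum page rather than a value read from this controller's CAP register:

    mdts=7        # from the register dump above
    mpsmin=4096   # assumption: CAP.MPSMIN corresponds to 4 KiB pages
    echo $(( (1 << mdts) * mpsmin ))   # 524288 bytes = 512 KiB per transfer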
00:09:30.360 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.360 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.360 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.360 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:09:30.360 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:09:30.360 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.360 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.360 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.360 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:09:30.360 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:09:30.360 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.360 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.360 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.360 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:09:30.360 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:09:30.360 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.360 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.360 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.360 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:09:30.360 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:09:30.360 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.360 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.360 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:30.360 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:09:30.360 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:09:30.360 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.360 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.360 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:30.360 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:09:30.360 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:09:30.360 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.360 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.360 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.360 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:09:30.360 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:09:30.360 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.360 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.360 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.360 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:09:30.360 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:09:30.360 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.360 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.360 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.360 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:09:30.360 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:09:30.360 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.360 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.360 19:11:14 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:09:30.360 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:09:30.360 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:09:30.360 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.360 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.360 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.360 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:09:30.360 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:09:30.360 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.360 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.360 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.360 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:09:30.360 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:09:30.360 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.360 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.360 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.360 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:09:30.360 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:09:30.360 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.360 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.360 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.360 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:09:30.360 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:09:30.360 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.360 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.360 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.360 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:09:30.360 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:09:30.360 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.360 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.360 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.360 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:09:30.360 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:09:30.360 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.360 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.360 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.360 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:09:30.360 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:09:30.360 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.360 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.360 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.360 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:09:30.360 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:09:30.360 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.360 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.360 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.360 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:09:30.360 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:09:30.360 19:11:14 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:09:30.360 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.360 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.360 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:09:30.360 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:09:30.360 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.360 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.360 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.360 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:09:30.360 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:09:30.360 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.360 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.360 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.361 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:09:30.361 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:09:30.361 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.361 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.361 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.361 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:09:30.361 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:09:30.361 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.361 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.361 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.361 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:09:30.361 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:09:30.361 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.361 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.361 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.361 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:09:30.361 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:09:30.361 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.361 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.361 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.361 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:09:30.361 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:09:30.361 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.361 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.361 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.361 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:09:30.361 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:09:30.361 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.361 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.361 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.361 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:09:30.361 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:09:30.361 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.361 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.361 19:11:14 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.361 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:09:30.361 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:09:30.361 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.361 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.361 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.361 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:09:30.361 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:09:30.361 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.361 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.361 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.361 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:09:30.361 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:09:30.361 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.361 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.361 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:30.361 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:09:30.361 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:09:30.361 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.361 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.361 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:30.361 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:09:30.361 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:09:30.361 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.361 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.361 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.361 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:09:30.361 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:09:30.361 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.361 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.361 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:30.361 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:09:30.361 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:09:30.361 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.361 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.361 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:30.361 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:09:30.361 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:09:30.361 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.361 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.361 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.361 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:09:30.361 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:09:30.361 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.361 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.361 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.361 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:09:30.361 19:11:14 nvme_scc -- nvme/functions.sh@23 
-- # nvme0[fna]=0 00:09:30.361 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.361 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.361 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:30.361 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:09:30.361 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:09:30.361 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.361 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.361 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.361 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:09:30.361 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:09:30.361 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.361 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.361 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.361 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:09:30.361 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:09:30.361 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.361 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.361 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.361 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:09:30.361 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:09:30.361 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.361 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.361 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.361 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:09:30.361 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:09:30.361 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.361 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.361 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.361 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:09:30.361 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:09:30.361 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.361 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.361 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:30.361 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:09:30.361 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:09:30.361 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.361 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.361 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:30.361 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:09:30.361 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:09:30.361 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.361 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.361 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.361 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:09:30.361 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:09:30.361 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.361 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.361 19:11:14 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.361 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:09:30.361 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:09:30.361 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.361 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.361 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.361 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:09:30.361 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:09:30.361 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.361 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.361 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:09:30.361 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:09:30.361 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:09:30.361 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.361 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.361 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.361 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:09:30.361 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:09:30.361 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.361 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.361 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.361 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:09:30.361 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:09:30.361 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.361 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.361 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.361 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:09:30.361 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:09:30.361 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.361 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.361 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.361 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:09:30.361 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:09:30.361 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.361 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.362 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.362 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:09:30.362 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:09:30.362 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.362 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.362 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.362 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:09:30.362 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:09:30.362 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.362 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.362 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:30.362 19:11:14 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:30.362 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:30.362 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.362 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.362 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:30.362 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:30.362 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:30.362 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.362 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.362 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:30.362 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:09:30.362 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:09:30.362 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.362 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.362 19:11:14 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:09:30.362 19:11:14 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:30.362 19:11:14 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:09:30.362 19:11:14 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:09:30.362 19:11:14 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:09:30.362 19:11:14 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:09:30.362 19:11:14 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:30.362 19:11:14 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:09:30.362 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.362 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.362 19:11:14 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:09:30.362 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:30.362 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.362 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.362 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:30.362 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:09:30.362 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:09:30.362 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.362 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.362 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:30.362 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:09:30.362 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:09:30.362 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.362 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.362 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:30.362 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:09:30.362 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:09:30.362 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.362 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:30.362 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:30.362 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 00:09:30.362 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 00:09:30.362 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.362 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.362 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:30.362 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 00:09:30.362 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 00:09:30.362 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.362 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.362 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:30.362 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 00:09:30.362 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 00:09:30.362 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.362 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.362 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:30.362 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:09:30.362 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:09:30.362 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.362 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.362 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:30.362 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:09:30.362 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:09:30.362 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.362 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.362 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.362 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:09:30.362 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:09:30.362 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.362 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.362 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.362 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:09:30.362 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:09:30.362 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.362 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.362 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.362 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:09:30.362 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:09:30.362 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.362 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.362 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.362 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:09:30.362 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:09:30.362 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.362 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.362 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:30.362 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:09:30.362 
19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 00:09:30.362 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.362 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.362 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.362 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:09:30.362 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:09:30.362 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.362 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.362 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.362 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:09:30.362 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:09:30.362 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.362 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.362 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.362 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:09:30.362 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:09:30.362 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.362 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.362 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.362 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:09:30.362 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:09:30.362 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.362 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.362 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.362 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:09:30.362 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:09:30.362 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.362 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.362 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.362 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:09:30.362 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:09:30.362 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.362 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.362 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.362 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:09:30.362 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:09:30.362 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.362 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.362 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.362 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"' 00:09:30.362 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0 00:09:30.362 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.362 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.362 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.362 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"' 00:09:30.362 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 00:09:30.362 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.362 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
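The namespace identify fields being captured here are enough to work out the namespace size: nsze=0x140000 logical blocks, combined with the in-use LBA format reported a little further down (lbaf4, "ms:0 lbads:12", i.e. 4096-byte data blocks). Illustrative arithmetic only, using just those two values from the dump:

    nsze=0x140000   # namespace size in logical blocks, from the dump above
    lbads=12        # log2(block size) of the in-use LBA format (lbaf4)
    echo $(( nsze * (1 << lbads) ))   # 5368709120 bytes = 5 GiB
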
00:09:30.362 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.362 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"' 00:09:30.362 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwa]=0 00:09:30.362 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.362 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.362 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.362 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"' 00:09:30.362 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npdg]=0 00:09:30.363 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.363 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.363 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.363 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"' 00:09:30.363 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npda]=0 00:09:30.363 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.363 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.363 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.363 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"' 00:09:30.363 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nows]=0 00:09:30.363 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.363 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.363 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:30.363 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"' 00:09:30.363 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128 00:09:30.363 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.363 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.363 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:30.363 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 00:09:30.363 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 00:09:30.363 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.363 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.363 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:30.363 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 00:09:30.363 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 00:09:30.363 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.363 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.363 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.363 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:09:30.363 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:09:30.363 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.363 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.363 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.363 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:09:30.363 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:09:30.363 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.363 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.363 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.363 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:09:30.363 19:11:14 nvme_scc -- 
nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:09:30.363 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.363 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.363 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.363 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:09:30.363 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:09:30.363 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.363 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.363 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.363 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:09:30.363 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:09:30.363 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.363 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.363 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:30.363 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"' 00:09:30.363 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 00:09:30.363 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.363 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.363 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:30.363 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"' 00:09:30.363 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000 00:09:30.363 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.363 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.363 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:30.363 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:30.363 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:30.363 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.363 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.363 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:30.363 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:30.363 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:30.363 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.363 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.363 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:30.363 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:30.363 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:30.363 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.363 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.363 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:30.363 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:30.363 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:30.363 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.363 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.363 19:11:14 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:30.363 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:30.363 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:30.363 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.363 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.363 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:30.363 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:30.363 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:30.363 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.363 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.363 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:30.363 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:30.363 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:30.363 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.363 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.363 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:30.363 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:30.363 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:30.363 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.363 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.363 19:11:14 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1 00:09:30.363 19:11:14 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:30.363 19:11:14 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:09:30.363 19:11:14 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:09:30.363 19:11:14 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:09:30.363 19:11:14 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:09:30.363 19:11:14 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:30.363 19:11:14 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:09:30.363 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.363 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.363 19:11:14 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:09:30.363 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:30.363 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.363 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.363 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:30.363 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:09:30.363 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:09:30.363 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.363 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.363 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:30.363 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:09:30.363 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:09:30.363 19:11:14 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:09:30.363 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.363 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:30.363 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:09:30.364 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:09:30.364 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.364 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.364 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:30.364 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:09:30.364 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:09:30.364 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.364 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.364 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:30.364 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:09:30.364 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:09:30.364 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.364 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.364 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:30.364 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:09:30.364 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:09:30.364 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.364 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.364 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:30.364 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:09:30.364 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:09:30.364 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.364 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.364 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:30.364 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:09:30.364 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:09:30.364 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.364 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.364 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.364 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:09:30.364 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:09:30.364 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.364 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.364 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.364 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:09:30.364 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:09:30.364 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.364 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.364 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.364 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:09:30.364 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:09:30.364 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.364 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.364 19:11:14 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.364 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:09:30.364 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:09:30.364 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.364 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.364 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:30.364 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:09:30.364 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:09:30.364 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.364 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.364 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.364 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:09:30.364 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:09:30.364 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.364 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.364 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.364 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:09:30.364 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:09:30.364 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.364 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.364 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.364 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:09:30.364 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:09:30.364 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.364 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.364 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.364 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:09:30.364 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:09:30.364 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.364 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.364 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.364 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:09:30.364 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:09:30.364 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.364 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.364 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.364 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:09:30.364 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:09:30.364 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.364 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.364 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.364 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:09:30.364 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:09:30.364 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.364 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.364 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.364 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:09:30.364 19:11:14 nvme_scc -- 
nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:09:30.364 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.364 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.364 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.364 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:09:30.364 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:09:30.364 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.364 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.364 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.364 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:09:30.364 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:09:30.364 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.364 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.364 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.364 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:09:30.364 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:09:30.364 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.364 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.364 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.364 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:09:30.364 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:09:30.364 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.364 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.364 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.364 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:09:30.364 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:09:30.364 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.364 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.364 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:30.364 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:09:30.364 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:09:30.364 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.364 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.364 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:30.364 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:09:30.364 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:09:30.364 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.364 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.364 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:30.364 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:09:30.364 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:09:30.364 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.364 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.364 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.364 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:09:30.364 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:09:30.364 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.364 19:11:14 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:09:30.364 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.364 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:09:30.364 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:09:30.364 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.364 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.364 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.364 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:09:30.364 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:09:30.364 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.364 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.364 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.364 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:09:30.364 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:09:30.364 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.364 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.364 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.364 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:09:30.364 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:09:30.364 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.364 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.364 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:30.364 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:09:30.365 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:09:30.365 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.365 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.365 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:30.365 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:09:30.365 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:09:30.365 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.365 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.365 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:30.365 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:30.365 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:30.365 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.365 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.365 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:30.365 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:30.365 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:30.365 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.365 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.365 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:30.365 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:30.365 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # 
nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:30.365 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.365 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.365 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:30.365 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:30.365 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:30.365 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.365 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.365 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:30.365 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:30.365 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:30.365 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.365 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.365 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:30.365 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:30.365 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:30.365 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.365 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.365 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:30.365 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:30.365 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:30.365 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.365 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.365 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:30.365 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:30.365 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:30.365 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.365 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.365 19:11:14 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:09:30.365 19:11:14 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:09:30.365 19:11:14 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:09:30.365 19:11:14 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:09:30.365 19:11:14 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:09:30.365 19:11:14 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:30.365 19:11:14 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:09:30.365 19:11:14 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:09:30.365 19:11:14 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:09:30.365 19:11:14 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:30.365 19:11:14 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:30.365 19:11:14 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:30.365 19:11:14 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:30.365 19:11:14 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:09:30.365 19:11:14 
nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:09:30.365 19:11:14 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:09:30.365 19:11:14 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:30.365 19:11:14 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:09:30.365 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.365 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.365 19:11:14 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:09:30.365 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:30.365 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.365 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.365 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:30.365 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:09:30.365 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:09:30.365 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.365 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.365 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:30.365 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:09:30.365 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:09:30.365 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.365 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.365 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:09:30.365 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:09:30.365 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:09:30.365 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.365 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.365 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:30.365 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:09:30.365 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:09:30.365 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.365 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.365 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:30.365 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:09:30.365 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:09:30.365 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.365 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.365 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:30.365 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:09:30.365 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:09:30.365 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.365 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.365 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:30.365 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:09:30.365 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:09:30.365 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.365 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.365 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.365 
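
Every [[ -n ... ]]/eval pair in this dump is one iteration of the same loop: nvme_get declares a global associative array named after the device, pipes /usr/local/src/nvme-cli/nvme id-ctrl (or id-ns) through IFS=: read, and evals each non-empty register/value pair into the array. A self-contained sketch of that pattern, fed canned lines instead of a live device; only the local -gA / IFS=: / read / eval skeleton is verbatim from the trace, the whitespace trimming is an assumption:

    #!/usr/bin/env bash
    nvme_get_sketch() {
        local ref=$1 reg val
        local -gA "$ref=()"                   # as at functions.sh@20 above
        while IFS=: read -r reg val; do       # split on the first ':'
            reg=${reg//[[:space:]]/}          # 'vid       ' -> 'vid'
            val=${val# }                      # drop the space after ':'
            [[ -n $reg && -n $val ]] || continue  # the [[ -n ... ]] guard
            eval "${ref}[$reg]=\"$val\""      # e.g. nvme1[vid]="0x1b36"
        done
    }
    printf '%s\n' 'vid       : 0x1b36' 'sn        : 12340 ' |
        { nvme_get_sketch demo; declare -p demo; }
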
19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:09:30.365 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:09:30.365 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.365 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.365 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:30.365 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:09:30.365 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:09:30.365 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.365 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.365 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.365 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:09:30.365 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:09:30.365 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.365 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.365 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:30.365 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:09:30.365 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:09:30.365 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.365 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.365 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.365 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:09:30.365 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:09:30.365 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.365 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.365 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.365 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:09:30.365 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:09:30.365 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.365 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.365 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:30.365 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:09:30.365 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:09:30.365 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.365 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.365 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:30.365 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:09:30.365 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:09:30.365 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.365 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.365 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.365 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:09:30.365 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:09:30.365 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.365 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.365 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:30.365 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:09:30.366 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:09:30.366 
19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.366 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.366 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:30.366 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:30.366 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:09:30.366 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.366 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.366 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.366 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:09:30.366 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:09:30.366 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.366 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.366 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.366 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:09:30.366 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:09:30.366 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.366 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.366 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.366 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:09:30.366 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:09:30.366 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.366 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.366 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.366 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:09:30.366 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:09:30.366 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.366 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.366 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.366 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:09:30.366 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:09:30.366 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.366 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.366 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.366 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:09:30.366 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:09:30.366 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.366 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.366 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:30.366 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:09:30.366 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:09:30.366 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.366 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.366 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:30.366 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:09:30.366 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:09:30.366 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.366 19:11:14 nvme_scc -- 
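
The oacs=0x12a captured above is a bitmask of optional admin commands. Decoded with the OACS bit layout from the NVMe base specification (names abbreviated below), it advertises Format NVM, Namespace Management, Directives, and Doorbell Buffer Config, consistent with a QEMU controller; the snippet computes the set rather than asserting it:

    #!/usr/bin/env bash
    oacs=0x12a
    bits=([0]=security [1]=format-nvm [2]=fw-commit/download [3]=ns-mgmt
          [4]=self-test [5]=directives [6]=nvme-mi [7]=virt-mgmt [8]=dbbuf-config)
    for i in "${!bits[@]}"; do
        (( oacs & (1 << i) )) && printf 'oacs bit %d: %s\n' "$i" "${bits[i]}"
    done
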
nvme/functions.sh@21 -- # read -r reg val 00:09:30.366 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:30.366 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:09:30.366 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:09:30.366 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.366 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.366 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:30.366 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:09:30.366 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:09:30.366 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.366 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.366 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:30.366 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:09:30.366 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:09:30.366 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.366 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.366 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.366 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:09:30.366 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:09:30.366 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.366 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.366 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.366 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:09:30.366 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:09:30.366 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.366 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.366 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.366 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:09:30.366 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:09:30.366 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.366 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.366 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.366 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:09:30.366 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:09:30.366 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.366 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.366 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:30.366 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:09:30.366 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:09:30.366 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.366 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.366 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:30.366 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:09:30.366 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:09:30.366 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.366 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.366 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.366 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[mtfa]="0"' 00:09:30.366 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:09:30.366 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.366 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.366 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.366 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:09:30.366 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:09:30.366 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.366 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.366 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.366 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:09:30.366 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:09:30.366 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.366 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.366 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.366 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:09:30.366 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:09:30.366 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.366 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.366 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.366 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:09:30.366 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:09:30.366 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.366 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.366 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.366 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:09:30.366 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:09:30.366 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.366 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.366 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.366 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:09:30.366 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:09:30.366 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.366 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.366 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.366 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:09:30.366 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:09:30.366 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.366 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.366 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.366 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:09:30.366 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:09:30.366 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.366 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.366 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.366 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:09:30.366 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:09:30.366 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.366 19:11:14 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:30.366 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.366 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:09:30.366 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:09:30.366 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.366 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.366 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.366 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:09:30.366 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:09:30.366 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.366 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.366 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.366 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:09:30.366 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:09:30.366 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.366 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.366 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.366 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:09:30.366 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:09:30.366 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.367 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.367 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.367 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:09:30.367 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:09:30.367 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.367 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.367 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.367 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:09:30.367 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:09:30.367 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.367 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.367 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.367 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:09:30.367 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:09:30.367 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.367 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.367 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.367 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:09:30.367 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:09:30.367 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.367 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.367 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.367 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:09:30.367 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:09:30.367 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.367 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.367 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.367 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[anacap]="0"' 00:09:30.367 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:09:30.367 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.367 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.367 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.367 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:09:30.367 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:09:30.367 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.367 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.367 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.367 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:09:30.367 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:09:30.367 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.367 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.367 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.367 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:09:30.367 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:09:30.367 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.367 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.367 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.367 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:09:30.367 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:09:30.367 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.367 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.367 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.367 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:09:30.367 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:09:30.367 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.367 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.367 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:30.367 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:09:30.367 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:09:30.367 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.367 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.367 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:30.367 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:09:30.367 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:09:30.367 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.367 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.367 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.367 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:09:30.367 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:09:30.367 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.367 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.367 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:30.367 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:09:30.367 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:09:30.367 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
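
The sqes=0x66 and cqes=0x44 just above pack two log2 entry sizes per byte: the low nibble is the required size, the high nibble the maximum. For this controller both nibbles agree, giving fixed 64-byte submission and 16-byte completion queue entries; verified arithmetically below:

    #!/usr/bin/env bash
    decode_qes() {
        printf '%s=%s: required %d B, max %d B\n' \
            "$1" "$2" "$((1 << ($2 & 0xf)))" "$((1 << (($2 >> 4) & 0xf)))"
    }
    decode_qes sqes 0x66   # 64 B / 64 B
    decode_qes cqes 0x44   # 16 B / 16 B
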
00:09:30.367 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.367 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:30.367 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:09:30.367 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:09:30.367 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.367 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.367 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.367 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:09:30.367 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:09:30.367 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.367 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.367 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.367 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:09:30.367 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:09:30.367 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.367 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.367 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:30.367 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:09:30.367 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:09:30.367 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.367 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.367 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.367 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:09:30.367 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:09:30.367 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.367 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.367 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.367 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:09:30.367 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:09:30.367 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.367 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.367 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.367 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:09:30.367 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:09:30.367 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.367 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.367 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.367 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:09:30.367 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:09:30.367 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.367 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.367 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.367 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:09:30.367 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:09:30.367 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.367 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.367 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:30.367 19:11:14 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:09:30.367 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:09:30.367 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.367 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.367 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:30.367 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:09:30.367 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:09:30.367 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.367 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.367 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.367 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:09:30.367 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:09:30.367 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.367 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.367 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.367 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:09:30.367 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:09:30.367 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.367 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.367 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.367 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:09:30.367 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:09:30.367 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.367 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.367 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:09:30.367 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:09:30.367 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:09:30.367 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.367 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.367 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.367 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:09:30.367 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:09:30.367 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.367 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.367 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.367 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:09:30.367 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:09:30.367 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.367 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.367 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.368 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:09:30.368 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:09:30.368 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.368 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.368 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.368 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:09:30.368 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1[fcatt]=0 00:09:30.368 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.368 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.368 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.368 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:09:30.368 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:09:30.368 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.368 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.368 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.368 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:09:30.368 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:09:30.368 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.368 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.368 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:30.368 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:30.368 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:30.368 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.368 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.368 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:30.368 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:30.368 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:30.368 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.368 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.368 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:30.368 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:09:30.368 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:09:30.368 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.368 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.368 19:11:14 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:09:30.368 19:11:14 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:30.368 19:11:14 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:09:30.368 19:11:14 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:09:30.368 19:11:14 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:09:30.368 19:11:14 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:09:30.368 19:11:14 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:30.368 19:11:14 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:09:30.368 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.368 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.368 19:11:14 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:09:30.368 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:30.368 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.368 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.368 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:30.368 19:11:14 
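
The loop header at functions.sh@54 above is an extglob pattern that enumerates both namespace node flavours under a controller: the generic character device (ng1n1) and the block device (nvme1n1), each of which then gets its own nvme_get id-ns pass. The same expansion, stated standalone (needs extglob, and real sysfs nodes to match anything):

    #!/usr/bin/env bash
    shopt -s extglob nullglob
    ctrl=/sys/class/nvme/nvme1
    # "ng${ctrl##*nvme}" -> "ng1"; "${ctrl##*/}n" -> "nvme1n";
    # so the glob matches exactly ng1n1 and nvme1n1, as probed above.
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
        echo "namespace node: ${ns##*/}"
    done
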
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 00:09:30.368 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 00:09:30.368 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.368 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.368 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:30.368 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 00:09:30.368 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 00:09:30.368 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.368 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.368 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:30.368 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 00:09:30.368 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 00:09:30.368 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.368 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.368 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:30.368 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 00:09:30.368 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 00:09:30.368 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.368 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.368 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:30.368 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 00:09:30.368 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 00:09:30.368 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.368 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.368 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:30.368 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"' 00:09:30.368 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7 00:09:30.368 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.368 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.368 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:30.368 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"' 00:09:30.368 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3 00:09:30.368 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.368 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.368 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:30.368 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"' 00:09:30.368 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f 00:09:30.368 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.368 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.368 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.368 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"' 00:09:30.368 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dps]=0 00:09:30.368 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.368 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.368 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.368 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"' 00:09:30.368 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nmic]=0 
00:09:30.368 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.368 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.368 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.368 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[rescap]="0"' 00:09:30.368 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[rescap]=0 00:09:30.368 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.368 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.368 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.368 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[fpi]="0"' 00:09:30.368 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[fpi]=0 00:09:30.368 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.368 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.368 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:30.368 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dlfeat]="1"' 00:09:30.368 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1 00:09:30.368 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.368 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.368 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.368 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nawun]="0"' 00:09:30.368 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawun]=0 00:09:30.368 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.368 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.368 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.368 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nawupf]="0"' 00:09:30.368 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0 00:09:30.368 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.368 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.368 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.368 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nacwu]="0"' 00:09:30.368 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0 00:09:30.368 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.368 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.368 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.368 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabsn]="0"' 00:09:30.368 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0 00:09:30.368 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.368 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.368 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.368 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabo]="0"' 00:09:30.368 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabo]=0 00:09:30.368 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.369 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.369 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.369 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabspf]="0"' 00:09:30.369 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0 00:09:30.369 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.369 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.369 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:09:30.369 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[noiob]="0"' 00:09:30.369 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[noiob]=0 00:09:30.369 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.369 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.369 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.369 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmcap]="0"' 00:09:30.369 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0 00:09:30.369 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.369 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.369 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.369 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npwg]="0"' 00:09:30.369 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwg]=0 00:09:30.369 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.369 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.369 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.369 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npwa]="0"' 00:09:30.369 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwa]=0 00:09:30.369 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.369 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.369 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.369 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npdg]="0"' 00:09:30.369 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npdg]=0 00:09:30.369 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.369 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.369 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.369 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npda]="0"' 00:09:30.369 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npda]=0 00:09:30.369 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.369 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.369 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.369 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nows]="0"' 00:09:30.369 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nows]=0 00:09:30.369 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.369 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.369 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:30.369 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mssrl]="128"' 00:09:30.369 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128 00:09:30.369 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.369 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.369 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:30.369 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mcl]="128"' 00:09:30.369 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mcl]=128 00:09:30.369 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.369 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.369 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:30.369 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[msrc]="127"' 00:09:30.369 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[msrc]=127 00:09:30.369 19:11:14 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:30.369 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.369 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.369 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nulbaf]="0"' 00:09:30.369 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0 00:09:30.369 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.369 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.369 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.369 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[anagrpid]="0"' 00:09:30.369 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0 00:09:30.369 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.369 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.369 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.369 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsattr]="0"' 00:09:30.369 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0 00:09:30.369 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.369 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.369 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.369 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmsetid]="0"' 00:09:30.369 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0 00:09:30.369 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.369 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.369 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.369 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"' 00:09:30.369 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[endgid]=0 00:09:30.369 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.369 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.369 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:30.369 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"' 00:09:30.369 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000 00:09:30.369 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.369 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.369 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:30.369 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"' 00:09:30.369 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000 00:09:30.369 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.369 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.369 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:30.369 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:30.369 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:30.369 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.369 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.369 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:30.369 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:30.369 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # 
ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:30.369 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.369 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.369 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:30.369 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:30.369 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:30.369 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.369 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.369 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:30.369 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:30.369 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:30.369 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.369 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.369 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:30.369 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:30.369 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:30.369 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.369 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.369 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:30.369 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:30.369 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:30.369 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.369 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.369 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:30.369 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:30.369 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:30.369 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.369 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.369 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:30.369 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:30.369 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:30.369 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.369 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.369 19:11:14 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1 00:09:30.369 19:11:14 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:30.369 19:11:14 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:09:30.369 19:11:14 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:09:30.369 19:11:14 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:09:30.369 19:11:14 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:09:30.369 19:11:14 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:30.369 19:11:14 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:09:30.369 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.369 
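
Each lbafN value in these dumps reads 'ms:<metadata bytes> lbads:<log2 of data size> rp:<relative performance>', with one entry marked '(in use)': lbaf4 (ms:0 lbads:12) for nvme0n1 earlier, versus lbaf7 (ms:64 lbads:12) for ng1n1, matching its flbas=0x7. Both are 4096-byte data blocks; the second adds 64 bytes of metadata. Computed below rather than asserted:

    #!/usr/bin/env bash
    decode_lbaf() {  # args: name, metadata bytes, lbads exponent
        printf '%s: %d-byte data blocks, %d B metadata\n' "$1" "$((1 << $3))" "$2"
    }
    decode_lbaf lbaf4 0 12    # marked '(in use)' for nvme0n1 above
    decode_lbaf lbaf7 64 12   # marked '(in use)' for ng1n1; flbas=0x7
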
19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.369 19:11:14 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:09:30.369 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:30.369 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.369 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.369 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:30.369 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:09:30.369 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:09:30.369 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.369 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.369 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:30.369 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:09:30.369 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:09:30.369 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.369 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.369 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:30.370 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:09:30.370 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:09:30.370 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.370 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.370 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:30.370 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:09:30.370 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:09:30.370 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.370 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.370 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:30.370 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:09:30.370 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:09:30.370 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.370 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.370 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:30.370 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:09:30.370 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:09:30.370 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.370 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.370 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:30.370 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:09:30.370 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:09:30.370 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.370 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.370 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:30.370 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:09:30.370 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:09:30.370 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.370 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.370 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:09:30.370 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:09:30.370 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:09:30.370 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.370 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.370 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.370 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:09:30.370 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:09:30.370 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.370 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.370 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.370 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:09:30.370 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:09:30.370 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.370 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.370 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.370 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:09:30.370 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:09:30.370 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.370 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.370 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:30.370 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:09:30.370 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:09:30.370 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.370 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.370 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.370 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:09:30.370 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:09:30.370 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.370 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.370 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.370 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:09:30.370 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:09:30.370 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.370 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.370 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.370 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:09:30.370 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:09:30.370 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.370 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.370 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.370 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:09:30.370 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:09:30.370 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.370 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.370 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.370 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:09:30.370 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:09:30.370 
19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.370 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.370 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.370 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:09:30.370 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:09:30.370 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.370 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.370 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.370 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:09:30.370 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:09:30.370 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.370 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.370 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.370 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:09:30.370 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:09:30.370 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.370 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.370 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.370 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:09:30.370 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:09:30.370 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.370 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.370 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.370 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:09:30.370 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:09:30.370 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.370 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.370 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.370 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:09:30.370 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:09:30.370 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.370 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.370 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.370 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:09:30.370 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:09:30.370 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.370 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.370 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.370 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:09:30.370 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:09:30.370 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.370 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.370 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:30.370 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:09:30.370 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:09:30.370 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.370 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.370 19:11:14 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:30.370 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:09:30.370 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:09:30.370 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.370 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.370 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:30.370 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:09:30.370 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:09:30.370 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.370 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.370 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.370 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:09:30.370 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:09:30.370 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.370 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.370 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.370 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:09:30.370 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:09:30.370 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.370 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.370 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.370 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:09:30.370 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:09:30.370 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.370 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.370 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.370 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:09:30.370 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:09:30.370 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.370 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.370 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.370 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:09:30.370 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:09:30.370 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.370 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.370 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:30.370 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:09:30.371 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:09:30.371 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.371 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.371 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:30.371 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:09:30.371 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:09:30.371 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.371 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.371 19:11:14 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:30.371 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:30.371 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:30.371 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.371 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.371 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:30.371 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:30.371 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:30.371 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.371 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.371 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:30.371 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:30.371 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:30.371 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.371 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.371 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:30.371 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:30.371 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:30.371 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.371 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.371 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:30.371 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:30.371 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:30.371 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.371 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.371 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:30.371 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:30.371 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:30.371 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.371 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.371 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:30.371 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:30.371 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:30.371 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.371 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.371 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:30.371 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:30.371 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:30.371 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.371 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.371 19:11:14 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:09:30.371 19:11:14 
nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:09:30.371 19:11:14 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:09:30.371 19:11:14 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:09:30.371 19:11:14 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:09:30.371 19:11:14 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:30.371 19:11:14 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:09:30.371 19:11:14 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:09:30.371 19:11:14 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:09:30.371 19:11:14 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:30.371 19:11:14 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:30.371 19:11:14 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:30.371 19:11:14 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:30.371 19:11:14 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:09:30.371 19:11:14 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:09:30.371 19:11:14 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:09:30.371 19:11:14 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:30.371 19:11:14 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:09:30.371 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.371 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.371 19:11:14 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:09:30.371 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:30.371 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.371 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.371 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:30.371 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:09:30.371 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:09:30.371 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.371 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.371 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:30.371 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:09:30.371 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:09:30.371 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.371 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.371 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:09:30.371 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:09:30.371 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:09:30.371 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.371 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.371 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:30.371 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:09:30.371 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:09:30.371 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.371 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.371 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:30.371 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2[fr]="8.0.0 "' 00:09:30.371 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:09:30.371 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.371 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.371 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:30.371 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:09:30.371 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:09:30.371 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.371 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.371 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:30.371 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:09:30.371 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:09:30.371 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.371 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.371 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.371 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:09:30.371 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:09:30.371 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.371 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.371 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:30.371 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:09:30.371 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:09:30.371 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.371 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.371 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.371 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:09:30.371 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:09:30.371 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.371 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.371 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:30.371 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:09:30.371 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:09:30.371 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.371 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.371 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.371 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:09:30.371 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:09:30.371 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.371 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.371 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.371 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:09:30.371 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:09:30.371 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.371 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.371 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:30.371 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:09:30.371 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:09:30.371 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:30.371 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.371 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:30.371 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:09:30.371 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:09:30.371 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.371 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.371 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.371 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:09:30.371 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:09:30.372 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.372 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.372 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:30.372 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:09:30.372 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:09:30.372 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.372 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.372 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:30.372 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:30.372 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:09:30.372 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.372 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.372 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.372 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:09:30.372 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:09:30.372 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.372 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.372 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.372 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:09:30.372 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:09:30.372 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.372 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.372 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.372 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:09:30.372 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:09:30.372 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.372 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.372 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.372 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:09:30.372 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:09:30.372 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.372 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.372 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.372 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:09:30.372 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:09:30.372 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.372 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
00:09:30.372 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.372 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:09:30.372 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:09:30.372 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.372 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.372 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:30.372 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:09:30.372 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:09:30.372 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.372 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.372 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:30.372 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:09:30.372 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:09:30.372 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.372 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.372 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:30.372 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:09:30.372 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:09:30.372 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.372 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.372 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:30.372 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:09:30.372 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:09:30.372 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.372 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.372 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:30.372 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:09:30.372 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:09:30.372 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.372 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.372 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.372 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:09:30.372 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:09:30.372 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.372 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.372 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.372 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:09:30.372 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:09:30.372 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.372 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.372 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.372 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:09:30.372 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:09:30.372 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.372 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.372 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.372 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:09:30.372 19:11:14 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[apsta]=0 00:09:30.372 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.372 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.372 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:30.372 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:09:30.372 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:09:30.372 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.372 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.372 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:30.372 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:09:30.372 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:09:30.372 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.372 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.372 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.372 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:09:30.372 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:09:30.372 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.372 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.372 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.372 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:09:30.372 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:09:30.372 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.372 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.372 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.372 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:09:30.372 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:09:30.372 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.372 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.372 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.372 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:09:30.372 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:09:30.372 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.372 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.372 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.372 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:09:30.372 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:09:30.372 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.372 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.372 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.372 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:09:30.372 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:09:30.372 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.372 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.372 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.372 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:09:30.372 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:09:30.372 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.372 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
00:09:30.372 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.372 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:09:30.372 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:09:30.372 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.372 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.372 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.372 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:09:30.372 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:09:30.372 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.372 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.372 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.372 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:09:30.372 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:09:30.372 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.372 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.372 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.372 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:09:30.372 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:09:30.372 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.372 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.372 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.372 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:09:30.373 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:09:30.373 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.373 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.373 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.373 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:09:30.373 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:09:30.373 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.373 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.373 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.373 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:09:30.373 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:09:30.373 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.373 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.373 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.373 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:09:30.373 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:09:30.373 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.373 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.373 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.373 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:09:30.373 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:09:30.373 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.373 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.373 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.373 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:09:30.373 19:11:14 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:09:30.373 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.373 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.373 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.373 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:09:30.373 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:09:30.373 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.373 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.373 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.373 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:09:30.373 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:09:30.373 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.373 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.373 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.373 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:09:30.373 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:09:30.373 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.373 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.373 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.373 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:09:30.373 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:09:30.373 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.373 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.373 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.373 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:09:30.373 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:09:30.373 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.373 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.373 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.373 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:09:30.373 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:09:30.373 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.373 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.373 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.373 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:09:30.373 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:09:30.373 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.373 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.373 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.373 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:09:30.373 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:09:30.373 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.373 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.373 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:30.373 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:09:30.373 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:09:30.373 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.373 19:11:14 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:09:30.373 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:30.373 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:09:30.373 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:09:30.373 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.373 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.373 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.373 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:09:30.373 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:09:30.373 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.373 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.373 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:30.373 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:09:30.373 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:09:30.373 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.373 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.373 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:30.373 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:09:30.373 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:09:30.373 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.373 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.373 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.373 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:09:30.373 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:09:30.373 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.373 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.373 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.373 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:09:30.373 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:09:30.373 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.373 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.373 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:30.373 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:09:30.373 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:09:30.373 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.373 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.373 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.373 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:09:30.373 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:09:30.373 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.373 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.373 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.373 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:09:30.373 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:09:30.373 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.373 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.373 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.373 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:09:30.373 
19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:09:30.373 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.373 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.373 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.373 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:09:30.373 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:09:30.373 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.373 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.373 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.373 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:09:30.373 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:09:30.373 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.373 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.373 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:30.373 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:09:30.373 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:09:30.373 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.373 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.373 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:30.373 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:09:30.373 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:09:30.373 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.373 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.373 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.373 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:09:30.373 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:09:30.373 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.373 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.373 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.373 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:09:30.373 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:09:30.374 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.374 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.374 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.374 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:09:30.374 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:09:30.374 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.374 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.374 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:09:30.374 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:09:30.374 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:09:30.374 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.374 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.374 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.374 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:09:30.374 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:09:30.374 19:11:14 nvme_scc -- nvme/functions.sh@21 -- 
# IFS=: 00:09:30.374 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.374 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.374 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:09:30.374 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:09:30.374 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.374 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.374 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.374 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:09:30.374 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:09:30.374 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.374 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.374 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.374 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:09:30.374 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:09:30.374 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.374 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.374 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.374 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:09:30.374 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:09:30.374 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.374 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.374 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.374 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:09:30.374 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:09:30.374 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.374 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.374 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:30.374 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:30.374 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:30.374 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.374 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.374 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:30.374 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:30.374 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:30.374 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.374 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.374 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:30.374 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:09:30.374 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:09:30.374 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.374 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.374 19:11:14 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:09:30.374 19:11:14 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:30.374 
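[Aside: the xtrace above is the nvme_get helper walking `nvme id-ctrl /dev/nvme2` output one "reg : val" line at a time and stashing each pair into a global associative array. A condensed, hypothetical sketch of that loop follows; the function shape is simplified from what the trace shows (nvme/functions.sh@16-23), and the nvme-cli path is taken verbatim from the trace.]

```bash
# Condensed sketch of the nvme_get helper traced above: run nvme-cli,
# split each output line at the first colon into reg/val, and record the
# pair in a global associative array named after the device (e.g. nvme2).
nvme_get() {
    local ref=$1 cmd=$2 dev=$3 reg val
    declare -gA "$ref"                      # creates the global array, e.g. nvme2
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}            # drop the padding around the key
        [[ -n $val ]] || continue           # skip the banner line and blanks
        eval "${ref}[\$reg]=\${val# }"      # nvme2[vid]=0x1b36, nvme2[mdts]=7, ...
    done < <(/usr/local/src/nvme-cli/nvme "$cmd" "$dev")
}

# Usage matching the trace: nvme_get nvme2 id-ctrl /dev/nvme2
# afterwards:              echo "${nvme2[subnqn]}"   # nqn.2019-08.org.qemu:12342
```

[The real helper evals pre-expanded `name[key]="value"` strings, as the trace shows; the variant above defers expansion to assignment time, which behaves the same for these values.]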
19:11:14 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 00:09:30.374 19:11:14 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n1 00:09:30.374 19:11:14 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 00:09:30.374 19:11:14 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 00:09:30.374 19:11:14 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:30.374 19:11:14 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 00:09:30.374 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.374 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.374 19:11:14 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 00:09:30.374 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:30.374 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.374 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.374 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:30.374 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 00:09:30.374 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 00:09:30.374 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.374 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.374 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:30.374 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 00:09:30.374 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 00:09:30.374 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.374 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.374 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:30.374 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nuse]="0x100000"' 00:09:30.374 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000 00:09:30.374 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.374 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.374 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:30.374 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsfeat]="0x14"' 00:09:30.374 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14 00:09:30.374 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.374 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.374 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:30.374 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nlbaf]="7"' 00:09:30.374 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7 00:09:30.374 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.374 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.374 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:30.374 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[flbas]="0x4"' 00:09:30.374 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4 00:09:30.374 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.374 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.374 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:30.374 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mc]="0x3"' 00:09:30.374 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3 00:09:30.374 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:30.374 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.374 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:30.374 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dpc]="0x1f"' 00:09:30.374 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f 00:09:30.374 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.374 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.374 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.374 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dps]="0"' 00:09:30.374 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dps]=0 00:09:30.374 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.374 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.374 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.374 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nmic]="0"' 00:09:30.374 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nmic]=0 00:09:30.374 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.374 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.374 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.374 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[rescap]="0"' 00:09:30.374 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[rescap]=0 00:09:30.374 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.374 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.374 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.374 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[fpi]="0"' 00:09:30.374 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[fpi]=0 00:09:30.374 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.374 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.374 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:30.374 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dlfeat]="1"' 00:09:30.374 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1 00:09:30.374 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.374 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.374 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.374 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nawun]="0"' 00:09:30.374 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawun]=0 00:09:30.374 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.374 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.374 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.374 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nawupf]="0"' 00:09:30.374 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0 00:09:30.374 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.374 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.374 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.374 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nacwu]="0"' 00:09:30.374 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0 00:09:30.374 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.374 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.374 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.374 19:11:14 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'ng2n1[nabsn]="0"'
00:09:30.374 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0
00:09:30.375 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabo]=0
00:09:30.375 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabspf]=0
00:09:30.375 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[noiob]=0
00:09:30.375 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nvmcap]=0
00:09:30.375 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwg]=0
00:09:30.375 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwa]=0
00:09:30.375 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npdg]=0
00:09:30.375 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npda]=0
00:09:30.375 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nows]=0
00:09:30.375 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128
00:09:30.375 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mcl]=128
00:09:30.375 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[msrc]=127
00:09:30.375 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0
00:09:30.375 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0
00:09:30.375 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsattr]=0
00:09:30.375 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nvmsetid]=0
00:09:30.375 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[endgid]=0
00:09:30.375 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000
00:09:30.375 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000
00:09:30.375 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 '
00:09:30.375 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8 lbads:9 rp:0 '
00:09:30.375 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16 lbads:9 rp:0 '
00:09:30.375 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64 lbads:9 rp:0 '
00:09:30.375 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
00:09:30.375 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8 lbads:12 rp:0 '
00:09:30.375 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16 lbads:12 rp:0 '
00:09:30.375 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64 lbads:12 rp:0 '
00:09:30.375 19:11:14 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1
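The ng2n1 dump above is produced by the nvme_get parsing loop: each line of nvme-cli's plain-text id-ns output is split on its first colon and stored into a bash associative array keyed by register name. A minimal sketch of that pattern, assuming nvme-cli's "field : value" output format; the helper and array names below are illustrative, not taken from functions.sh:

    #!/usr/bin/env bash
    # Parse `nvme id-ns` text output into an associative array.
    parse_id_ns() {
        local dev=$1 reg val
        declare -gA ns_info=()
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}             # "lbaf  4 " -> "lbaf4", "nsze   " -> "nsze"
            [[ -n $reg ]] && ns_info[$reg]=${val# }
        done < <(nvme id-ns "$dev")
    }

    parse_id_ns /dev/ng2n1
    echo "nsze=${ns_info[nsze]} flbas=${ns_info[flbas]}"

Because read is given two variables, everything after the first colon lands in val, which is how multi-colon values such as "ms:0 lbads:12 rp:0 (in use)" survive intact.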
00:09:30.375 19:11:14 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:09:30.375 19:11:14 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]]
00:09:30.375 19:11:14 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n2
00:09:30.375 19:11:14 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2
00:09:30.375 19:11:14 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n2 reg val
00:09:30.375 19:11:14 nvme_scc -- nvme/functions.sh@18 -- # shift
00:09:30.375 19:11:14 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()'
00:09:30.375 19:11:14 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2
00:09:30.375 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsze]=0x100000
00:09:30.376 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[ncap]=0x100000
00:09:30.376 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nuse]=0x100000
00:09:30.376 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsfeat]=0x14
00:09:30.376 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nlbaf]=7
00:09:30.376 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[flbas]=0x4
00:09:30.376 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mc]=0x3
00:09:30.376 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dpc]=0x1f
00:09:30.376 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dps]=0
00:09:30.376 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nmic]=0
00:09:30.376 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[rescap]=0
00:09:30.376 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[fpi]=0
00:09:30.376 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dlfeat]=1
00:09:30.376 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawun]=0
00:09:30.376 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawupf]=0
00:09:30.376 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nacwu]=0
00:09:30.376 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabsn]=0
00:09:30.376 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabo]=0
00:09:30.376 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabspf]=0
00:09:30.376 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[noiob]=0
00:09:30.376 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nvmcap]=0
00:09:30.376 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwg]=0
00:09:30.376 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwa]=0
00:09:30.376 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npdg]=0
00:09:30.376 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npda]=0
00:09:30.376 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nows]=0
00:09:30.376 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mssrl]=128
00:09:30.376 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mcl]=128
00:09:30.376 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[msrc]=127
00:09:30.376 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nulbaf]=0
00:09:30.376 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[anagrpid]=0
00:09:30.376 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsattr]=0
00:09:30.376 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nvmsetid]=0
00:09:30.376 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[endgid]=0
00:09:30.377 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nguid]=00000000000000000000000000000000
00:09:30.377 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[eui64]=0000000000000000
00:09:30.377 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf0]='ms:0 lbads:9 rp:0 '
00:09:30.377 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf1]='ms:8 lbads:9 rp:0 '
00:09:30.377 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf2]='ms:16 lbads:9 rp:0 '
00:09:30.377 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf3]='ms:64 lbads:9 rp:0 '
00:09:30.377 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
00:09:30.377 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf5]='ms:8 lbads:12 rp:0 '
00:09:30.377 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf6]='ms:16 lbads:12 rp:0 '
00:09:30.377 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf7]='ms:64 lbads:12 rp:0 '
00:09:30.377 19:11:14 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2
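Each block above opens with the @54 loop header, which uses an extglob pattern to enumerate both namespace node flavours a controller exposes. A sketch of how that pattern expands for this controller; nothing is assumed beyond the paths already probed in the trace:

    #!/usr/bin/env bash
    shopt -s extglob nullglob
    ctrl=/sys/class/nvme/nvme2
    # "ng${ctrl##*nvme}" -> ng2    (char-device nodes: ng2n1, ng2n2, ...)
    # "${ctrl##*/}n"     -> nvme2n (block-device nodes: nvme2n1, nvme2n2, ...)
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
        echo "found ${ns##*/} (namespace index ${ns##*n})"
    done

Since ${ns##*n} reduces ng2n2 and nvme2n2 to the same index 2, the nvme2n* nodes visited later overwrite the ng2n* entries in _ctrl_ns, which is why the trace registers every index twice.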
00:09:30.377 19:11:14 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:09:30.377 19:11:14 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]]
00:09:30.377 19:11:14 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n3
00:09:30.377 19:11:14 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3
00:09:30.377 19:11:14 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n3 reg val
00:09:30.377 19:11:14 nvme_scc -- nvme/functions.sh@18 -- # shift
00:09:30.377 19:11:14 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n3=()'
00:09:30.377 19:11:14 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3
00:09:30.377 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsze]=0x100000
00:09:30.377 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[ncap]=0x100000
00:09:30.377 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nuse]=0x100000
00:09:30.377 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsfeat]=0x14
00:09:30.377 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nlbaf]=7
00:09:30.377 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[flbas]=0x4
00:09:30.377 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mc]=0x3
00:09:30.377 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dpc]=0x1f
00:09:30.377 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dps]=0
00:09:30.377 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nmic]=0
00:09:30.377 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[rescap]=0
00:09:30.377 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[fpi]=0
00:09:30.377 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dlfeat]=1
00:09:30.377 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nawun]=0
00:09:30.378 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nawupf]=0
00:09:30.378 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nacwu]=0
00:09:30.378 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabsn]=0
00:09:30.378 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabo]=0
00:09:30.378 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabspf]=0
00:09:30.378 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[noiob]=0
00:09:30.378 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nvmcap]=0
00:09:30.378 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npwg]=0
00:09:30.378 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npwa]=0
00:09:30.378 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npdg]=0
00:09:30.378 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npda]=0
00:09:30.378 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nows]=0
00:09:30.378 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mssrl]=128
00:09:30.378 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mcl]=128
00:09:30.378 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[msrc]=127
00:09:30.378 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nulbaf]=0
00:09:30.378 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[anagrpid]=0
00:09:30.378 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsattr]=0
00:09:30.378 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nvmsetid]=0
00:09:30.378 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[endgid]=0
00:09:30.378 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nguid]=00000000000000000000000000000000
00:09:30.378 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[eui64]=0000000000000000
00:09:30.378 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf0]='ms:0 lbads:9 rp:0 '
00:09:30.378 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf1]='ms:8 lbads:9 rp:0 '
00:09:30.378 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf2]='ms:16 lbads:9 rp:0 '
00:09:30.378 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf3]='ms:64 lbads:9 rp:0 '
00:09:30.378 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
00:09:30.378 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf5]='ms:8 lbads:12 rp:0 '
00:09:30.378 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf6]='ms:16 lbads:12 rp:0 '
00:09:30.378 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf7]='ms:64 lbads:12 rp:0 '
00:09:30.378 19:11:14 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3
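Every @23 line in these dumps is a dynamic-array write: nvme_get receives the target array's name as its ref parameter, so each assignment is built as text and run with eval. A sketch of that pattern; the helper names are illustrative, and the nameref variant is an equivalent alternative rather than what functions.sh itself does:

    #!/usr/bin/env bash
    store_eval() {                       # the eval pattern the trace shows
        local ref=$1 key=$2 val=$3
        eval "${ref}[${key}]=\"\$val\""  # runs e.g.: demo[nsze]="$val"
    }
    declare -gA demo=()
    store_eval demo nsze 0x100000
    echo "via eval:    ${demo[nsze]}"

    store_ref() {                        # bash 4.3+ nameref alternative
        local -n arr=$1
        arr[$2]=$3
    }
    store_ref demo flbas 0x4
    echo "via nameref: ${demo[flbas]}"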
00:09:30.378 19:11:14 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:09:30.378 19:11:14 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]]
00:09:30.378 19:11:14 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1
00:09:30.378 19:11:14 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1
00:09:30.378 19:11:14 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val
00:09:30.378 19:11:14 nvme_scc -- nvme/functions.sh@18 -- # shift
00:09:30.379 19:11:14 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()'
00:09:30.379 19:11:14 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1
00:09:30.379 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000
00:09:30.379 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000
00:09:30.379 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000
00:09:30.379 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14
00:09:30.379 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7
00:09:30.379 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4
00:09:30.379 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3
00:09:30.379 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f
00:09:30.379 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0
00:09:30.379 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0
00:09:30.379 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0
00:09:30.379 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0
00:09:30.379 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1
00:09:30.379 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0
00:09:30.379 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0
00:09:30.379 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0
00:09:30.379 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0
00:09:30.379 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0
00:09:30.379 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0
00:09:30.379 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0
00:09:30.379 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0
00:09:30.379 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0
00:09:30.379 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0
00:09:30.379 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0
00:09:30.379 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0
00:09:30.379 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0
00:09:30.379 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128
00:09:30.379 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128
00:09:30.379 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127
00:09:30.379 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0
00:09:30.379 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0
00:09:30.380 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0
00:09:30.380 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0
00:09:30.380 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0
00:09:30.380 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000
00:09:30.380 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000
00:09:30.380 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 '
00:09:30.380 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 '
00:09:30.380 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 '
00:09:30.380 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 '
00:09:30.380 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
00:09:30.380 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 '
00:09:30.380 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 '
00:09:30.380 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 '
00:09:30.380 19:11:14 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1
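The fields captured for nvme2n1 are enough to derive its geometry: flbas=0x4 selects lbaf4, whose lbads:12 means 4096-byte logical blocks, so nsze=0x100000 blocks works out to 4 GiB. A sketch of that decode; the array literal only mirrors values recorded above, and the logic is illustrative:

    #!/usr/bin/env bash
    declare -A nvme2n1=( [flbas]=0x4 [nsze]=0x100000
                         [lbaf4]='ms:0 lbads:12 rp:0 (in use)' )
    fmt=$(( nvme2n1[flbas] & 0xf ))      # low nibble of FLBAS = active LBA format
    lbads=${nvme2n1[lbaf$fmt]#*lbads:}   # -> "12 rp:0 (in use)"
    lbads=${lbads%% *}                   # -> "12"
    echo "block size $(( 1 << lbads )) B, capacity $(( nvme2n1[nsze] << lbads )) B"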
]] 00:09:30.380 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:09:30.380 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:09:30.380 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.380 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.380 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:30.380 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:09:30.380 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:09:30.380 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.380 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.380 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:30.380 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:09:30.380 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:09:30.380 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.380 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.380 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:30.380 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:09:30.380 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:09:30.380 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.380 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.380 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:30.380 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:09:30.380 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:09:30.380 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.380 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.380 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:30.380 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:09:30.380 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:09:30.380 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.380 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.380 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:30.380 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:09:30.380 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:09:30.380 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.380 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.380 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.380 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:09:30.380 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:09:30.380 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.380 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.380 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.380 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:09:30.380 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:09:30.380 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.380 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.380 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.380 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:09:30.380 19:11:14 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:09:30.380 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.380 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.380 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.380 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:09:30.380 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:09:30.380 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.380 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.380 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:30.380 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:09:30.380 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:09:30.380 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.380 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.380 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.380 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:09:30.380 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:09:30.380 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.380 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.380 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.380 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:09:30.381 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:09:30.381 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.381 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.381 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.381 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:09:30.381 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:09:30.381 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.381 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.381 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.381 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:09:30.381 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:09:30.381 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.381 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.381 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.381 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:09:30.381 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:09:30.381 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.381 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.381 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.381 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:09:30.381 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:09:30.381 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.381 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.381 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.381 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:09:30.381 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:09:30.381 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.381 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:30.381 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.381 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:09:30.381 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:09:30.381 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.381 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.381 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.381 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:09:30.381 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:09:30.381 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.381 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.381 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.381 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:09:30.381 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:09:30.381 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.381 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.381 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.381 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:09:30.381 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:09:30.381 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.381 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.381 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.381 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:09:30.381 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:09:30.381 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.381 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.381 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.381 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:09:30.381 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:09:30.381 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.381 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.381 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:30.381 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:09:30.381 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:09:30.381 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.381 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.381 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:30.381 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:09:30.381 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:09:30.381 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.381 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.381 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:30.381 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:09:30.381 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:09:30.381 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.381 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.381 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.381 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2n2[nulbaf]="0"' 00:09:30.381 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:09:30.381 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.381 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.381 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.381 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:09:30.381 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:09:30.381 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.381 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.381 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.381 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:09:30.381 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:09:30.381 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.381 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.381 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.381 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:09:30.381 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:09:30.381 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.381 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.381 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.381 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:09:30.381 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:09:30.381 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.381 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.381 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:30.381 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:09:30.381 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:09:30.381 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.381 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.381 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:30.381 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:09:30.381 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:09:30.381 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.381 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.381 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:30.381 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:30.381 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:30.381 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.381 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.381 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:30.381 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:30.381 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:30.381 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.381 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.381 19:11:14 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:30.381 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:30.381 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:30.381 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.381 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.381 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:30.381 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:30.381 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:30.381 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.381 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.381 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:30.381 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:30.381 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:30.381 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.381 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.381 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:30.381 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:30.381 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:30.381 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.381 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.381 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:30.381 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:30.381 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:30.381 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.381 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.381 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:30.381 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:30.381 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:30.381 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.381 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.381 19:11:14 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:09:30.381 19:11:14 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:30.381 19:11:14 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:09:30.381 19:11:14 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:09:30.381 19:11:14 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:09:30.381 19:11:14 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:09:30.381 19:11:14 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:30.382 19:11:14 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:09:30.382 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.382 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.382 19:11:14 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:09:30.382 
19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:30.382 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.382 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.382 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:30.382 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:09:30.382 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:09:30.382 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.382 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.382 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:30.382 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:09:30.382 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:09:30.382 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.382 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.382 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:30.382 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:09:30.382 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:09:30.382 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.382 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.382 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:30.382 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:09:30.382 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:09:30.382 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.382 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.382 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:30.382 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:09:30.382 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:09:30.382 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.382 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.382 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:30.382 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:09:30.382 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:09:30.382 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.382 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.382 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:30.382 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:09:30.382 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:09:30.382 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.382 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.382 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:30.382 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:09:30.382 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:09:30.382 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.382 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.382 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.382 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:09:30.382 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:09:30.382 19:11:14 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:30.382 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.382 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.382 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:09:30.382 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:09:30.382 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.382 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.382 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.382 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:09:30.382 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:09:30.382 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.382 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.382 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.382 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:09:30.382 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:09:30.382 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.382 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.382 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:30.382 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:09:30.382 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:09:30.382 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.382 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.382 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.382 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:09:30.382 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:09:30.382 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.382 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.382 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.382 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:09:30.382 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:09:30.382 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.382 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.382 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.382 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:09:30.382 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:09:30.382 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.382 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.382 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.382 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:09:30.382 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:09:30.382 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.382 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.382 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.382 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:09:30.382 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:09:30.382 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.382 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.382 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:09:30.382 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:09:30.382 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:09:30.382 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.382 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.382 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.382 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:09:30.382 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:09:30.382 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.382 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.382 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.382 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:09:30.382 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:09:30.382 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.382 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.382 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.382 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:09:30.382 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:09:30.382 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.382 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.382 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.382 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:09:30.382 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:09:30.382 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.382 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.382 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.382 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:09:30.382 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:09:30.382 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.382 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.382 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.382 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:09:30.382 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:09:30.382 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.382 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.382 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.382 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:09:30.382 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:09:30.382 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.382 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.382 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:30.382 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:09:30.382 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:09:30.382 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.382 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.382 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:30.382 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:09:30.383 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # 
nvme2n3[mcl]=128 00:09:30.383 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.383 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.383 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:30.383 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:09:30.383 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:09:30.383 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.383 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.383 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.383 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:09:30.383 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:09:30.383 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.383 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.383 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.383 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:09:30.383 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:09:30.383 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.383 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.383 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.383 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:09:30.383 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:09:30.383 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.383 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.383 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.383 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:09:30.383 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:09:30.383 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.383 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.383 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.383 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:09:30.383 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:09:30.383 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.383 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.383 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:30.383 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:09:30.383 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:09:30.383 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.383 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.383 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:30.383 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:09:30.383 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:09:30.383 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.383 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.383 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:30.383 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:30.383 19:11:14 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:30.383 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.383 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.383 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:30.383 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:30.383 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:30.383 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.383 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.383 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:30.383 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:30.383 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:30.383 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.383 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.383 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:30.383 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:30.383 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:30.383 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.383 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.383 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:30.383 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:30.383 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:30.383 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.383 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.383 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:30.383 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:30.383 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:30.383 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.383 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.383 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:30.383 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:30.383 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:30.383 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.383 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.383 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:30.383 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:30.383 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:30.383 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.383 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.383 19:11:14 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:09:30.383 19:11:14 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:09:30.383 19:11:14 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:09:30.383 19:11:14 nvme_scc -- 
nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:09:30.383 19:11:14 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:09:30.383 19:11:14 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:30.383 19:11:14 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:09:30.383 19:11:14 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:09:30.383 19:11:14 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:09:30.383 19:11:14 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:30.383 19:11:14 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:30.383 19:11:14 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:30.383 19:11:14 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:30.383 19:11:14 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:09:30.383 19:11:14 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:09:30.383 19:11:14 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:09:30.383 19:11:14 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:30.383 19:11:14 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:09:30.383 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.383 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.383 19:11:14 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:09:30.383 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:30.383 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.383 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.383 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:30.383 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:09:30.383 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:09:30.383 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.383 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.383 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:30.383 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:09:30.383 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:09:30.383 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.383 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.383 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:09:30.383 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:09:30.383 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:09:30.383 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.383 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.383 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:30.383 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:09:30.383 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:09:30.383 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.383 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.383 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:30.383 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:09:30.383 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:09:30.383 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.383 19:11:14 
nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.383 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:30.383 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:09:30.383 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:09:30.383 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.383 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.383 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:30.383 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:09:30.383 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:09:30.383 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.383 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.383 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:09:30.383 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:09:30.383 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:09:30.383 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.383 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.383 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:30.383 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:09:30.383 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:09:30.383 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.383 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.383 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.384 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:09:30.384 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:09:30.384 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.384 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.384 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:30.384 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:09:30.384 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:09:30.384 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.384 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.384 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.384 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:09:30.384 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:09:30.384 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.384 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.384 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.384 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:09:30.384 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:09:30.384 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.384 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.384 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:30.384 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:09:30.384 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:09:30.384 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.384 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.384 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:09:30.384 19:11:14 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:09:30.384 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:09:30.384 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.384 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.384 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.384 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:09:30.384 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:09:30.384 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.384 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.384 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:30.384 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:09:30.384 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:09:30.384 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.384 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.384 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:30.384 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:30.384 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:09:30.384 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.384 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.384 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.384 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:09:30.384 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:09:30.384 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.384 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.384 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.384 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:09:30.384 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:09:30.384 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.384 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.384 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.384 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:09:30.384 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:09:30.384 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.384 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.384 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.384 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:09:30.384 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:09:30.384 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.384 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.384 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.384 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:09:30.384 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:09:30.384 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.384 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.384 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.384 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:09:30.384 19:11:14 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[mec]=0 00:09:30.384 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.384 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.384 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:30.384 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:09:30.384 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:09:30.384 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.384 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.384 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:30.384 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:09:30.384 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:09:30.384 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.384 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.384 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:30.384 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:09:30.384 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:09:30.384 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.384 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.384 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:30.384 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:09:30.384 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:09:30.384 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.384 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.384 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:30.384 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:09:30.384 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:09:30.384 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.384 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.384 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.384 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:09:30.384 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:09:30.384 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.384 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.384 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.384 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:09:30.384 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:09:30.384 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.384 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.384 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.384 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:09:30.384 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:09:30.384 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.384 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.384 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.384 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:09:30.384 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:09:30.384 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.384 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.384 
19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:30.384 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:09:30.384 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:09:30.384 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.384 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.384 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:30.384 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:09:30.384 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:09:30.384 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.384 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.384 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.384 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:09:30.384 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:09:30.384 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.384 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.384 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.384 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:09:30.384 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:09:30.384 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.384 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.384 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.384 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:09:30.384 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:09:30.384 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.384 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.384 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.384 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:09:30.384 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:09:30.384 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.384 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.384 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.384 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:09:30.384 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:09:30.384 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.384 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.384 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.384 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:09:30.384 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:09:30.384 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.384 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.384 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.384 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:09:30.384 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:09:30.385 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.385 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.385 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.385 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:09:30.385 19:11:14 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[dsto]=0 00:09:30.385 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.385 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.385 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.385 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:09:30.385 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:09:30.385 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.385 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.385 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.385 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:09:30.385 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:09:30.385 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.385 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.385 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.385 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:09:30.385 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:09:30.385 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.385 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.385 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.385 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:09:30.385 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:09:30.385 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.385 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.385 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.385 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:09:30.385 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:09:30.385 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.385 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.385 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.385 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:09:30.385 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:09:30.385 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.385 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.385 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.385 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:09:30.385 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:09:30.385 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.385 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.385 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.385 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:09:30.385 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:09:30.385 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.385 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.385 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.385 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:09:30.385 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:09:30.385 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.385 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.385 
19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:30.385 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:09:30.385 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:09:30.385 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.385 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.385 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.385 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:09:30.385 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:09:30.385 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.385 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.385 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.385 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:09:30.385 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:09:30.385 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.385 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.385 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.385 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:09:30.385 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:09:30.385 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.385 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.385 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.385 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:09:30.385 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:09:30.385 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.385 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.385 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.385 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:09:30.385 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:09:30.385 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.385 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.385 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.385 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:09:30.385 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:09:30.385 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.385 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.385 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.385 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:09:30.385 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:09:30.385 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.385 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.385 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:30.385 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:09:30.385 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:09:30.385 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.385 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.385 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:30.385 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:09:30.385 
19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:09:30.385 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.385 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.385 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.385 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:09:30.385 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:09:30.385 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.385 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.385 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:30.385 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:09:30.385 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:09:30.385 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.385 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.385 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:30.385 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:09:30.385 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:09:30.385 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.385 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.385 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.385 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:09:30.385 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:09:30.385 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.385 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.385 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.385 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:09:30.385 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:09:30.385 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.385 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.385 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:30.385 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:09:30.385 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:09:30.385 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.385 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.385 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.385 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:09:30.385 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:09:30.385 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.385 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.385 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.385 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:09:30.385 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:09:30.385 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.385 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.385 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.385 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:09:30.385 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:09:30.385 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.385 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read 
-r reg val 00:09:30.385 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.385 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:09:30.385 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:09:30.385 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.385 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.385 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.385 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:09:30.385 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:09:30.385 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.385 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.385 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:30.385 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:09:30.385 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:09:30.385 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.385 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.386 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:30.386 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:09:30.386 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:09:30.386 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.386 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.386 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.386 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:09:30.386 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:09:30.386 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.386 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.386 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.386 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:09:30.386 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:09:30.386 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.386 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.386 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.386 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:09:30.386 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:09:30.386 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.386 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.386 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:09:30.386 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:09:30.386 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:09:30.386 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.386 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.386 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.386 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:09:30.386 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:09:30.386 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.386 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.386 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.386 19:11:14 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:09:30.386 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:09:30.386 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.386 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.386 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.386 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:09:30.386 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:09:30.386 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.386 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.386 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.386 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:09:30.386 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:09:30.386 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.386 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.386 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.386 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:09:30.386 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:09:30.386 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.386 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.386 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.386 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:09:30.386 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:09:30.386 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.386 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.386 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:30.386 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:30.386 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:30.386 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.386 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.386 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:30.386 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:30.386 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:30.386 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.386 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.386 19:11:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:30.386 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:09:30.386 19:11:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:09:30.386 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.386 19:11:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.386 19:11:14 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:09:30.386 19:11:14 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:09:30.386 19:11:14 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:09:30.386 19:11:14 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:09:30.386 19:11:14 nvme_scc -- 
nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:09:30.386 19:11:14 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:09:30.386 19:11:14 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:09:30.386 19:11:14 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc 00:09:30.386 19:11:14 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:09:30.386 19:11:14 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc 00:09:30.386 19:11:14 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:09:30.386 19:11:14 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc 00:09:30.386 19:11:14 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc 00:09:30.386 19:11:14 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]] 00:09:30.386 19:11:14 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:30.386 19:11:14 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1 00:09:30.386 19:11:14 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs 00:09:30.386 19:11:14 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1 00:09:30.386 19:11:14 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1 00:09:30.386 19:11:14 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs 00:09:30.386 19:11:14 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:09:30.386 19:11:14 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:09:30.386 19:11:14 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:09:30.386 19:11:14 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:30.386 19:11:14 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:30.386 19:11:14 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:30.386 19:11:14 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:30.386 19:11:14 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1 00:09:30.386 19:11:14 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:30.386 19:11:14 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0 00:09:30.386 19:11:14 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs 00:09:30.386 19:11:14 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0 00:09:30.386 19:11:14 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0 00:09:30.386 19:11:14 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs 00:09:30.386 19:11:14 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:09:30.386 19:11:14 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:09:30.386 19:11:14 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:09:30.386 19:11:14 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:30.386 19:11:14 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:30.386 19:11:14 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:30.386 19:11:14 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:30.386 19:11:14 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0 00:09:30.386 19:11:14 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:30.386 19:11:14 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3 00:09:30.386 19:11:14 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs 00:09:30.386 19:11:14 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3 00:09:30.386 19:11:14 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3 00:09:30.386 19:11:14 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs 
00:09:30.386 19:11:14 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:09:30.386 19:11:14 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:09:30.386 19:11:14 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:09:30.386 19:11:14 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:30.386 19:11:14 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:30.386 19:11:14 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:30.386 19:11:14 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:30.386 19:11:14 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3 00:09:30.386 19:11:14 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:30.386 19:11:14 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2 00:09:30.386 19:11:14 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs 00:09:30.386 19:11:14 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2 00:09:30.386 19:11:14 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2 00:09:30.386 19:11:14 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs 00:09:30.386 19:11:14 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:09:30.386 19:11:14 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:09:30.387 19:11:14 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:09:30.387 19:11:14 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:30.387 19:11:14 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:30.387 19:11:14 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:30.387 19:11:14 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:30.387 19:11:14 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2 00:09:30.387 19:11:14 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 )) 00:09:30.387 19:11:14 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1 00:09:30.387 19:11:14 nvme_scc -- nvme/functions.sh@209 -- # return 0 00:09:30.387 19:11:14 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:09:30.387 19:11:14 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:09:30.387 19:11:14 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:30.958 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:31.246 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:31.246 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:31.246 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:31.246 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:09:31.507 19:11:15 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:09:31.507 19:11:15 nvme_scc -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:31.507 19:11:15 nvme_scc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:31.507 19:11:15 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:09:31.507 ************************************ 00:09:31.507 START TEST nvme_simple_copy 00:09:31.507 ************************************ 00:09:31.507 19:11:15 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:09:31.768 Initializing NVMe Controllers 00:09:31.768 Attaching to 0000:00:10.0 00:09:31.768 Controller supports SCC. Attached to 0000:00:10.0 00:09:31.769 Namespace ID: 1 size: 6GB 00:09:31.769 Initialization complete. 
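The selection traced just above is how nvme_scc.sh picks its target: get_ctrl_with_feature walks every parsed controller and keeps those whose ONCS field has bit 8 set, the Simple Copy bit. All four QEMU controllers report oncs=0x15d here, and nvme1 at 0000:00:10.0 is returned first. A sketch of that check, with the oncs values hard-coded from this run instead of read back from the devices; the simple-copy test output continues below:

    #!/usr/bin/env bash
    # ctrl_has_scc as traced above: Simple Copy support is ONCS bit 8.
    declare -A nvme0=([oncs]=0x15d) nvme1=([oncs]=0x15d)  # values from this run
    ctrl_has_scc() {
        local -n _ctrl=$1                   # nameref into the per-ctrl array
        (( ${_ctrl[oncs]:-0} & 1 << 8 ))    # 0x15d & 0x100 is non-zero
    }
    for ctrl in nvme0 nvme1; do
        ctrl_has_scc "$ctrl" && echo "$ctrl supports SCC"
    done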
00:09:31.769 00:09:31.769 Controller QEMU NVMe Ctrl (12340 ) 00:09:31.769 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:09:31.769 Namespace Block Size:4096 00:09:31.769 Writing LBAs 0 to 63 with Random Data 00:09:31.769 Copied LBAs from 0 - 63 to the Destination LBA 256 00:09:31.769 LBAs matching Written Data: 64 00:09:31.769 00:09:31.769 real 0m0.264s 00:09:31.769 user 0m0.097s 00:09:31.769 sys 0m0.065s 00:09:31.769 19:11:15 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:31.769 ************************************ 00:09:31.769 END TEST nvme_simple_copy 00:09:31.769 ************************************ 00:09:31.769 19:11:15 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x 00:09:31.769 ************************************ 00:09:31.769 END TEST nvme_scc 00:09:31.769 ************************************ 00:09:31.769 00:09:31.769 real 0m7.651s 00:09:31.769 user 0m1.097s 00:09:31.769 sys 0m1.451s 00:09:31.769 19:11:15 nvme_scc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:31.769 19:11:15 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:09:31.769 19:11:15 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]] 00:09:31.769 19:11:15 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]] 00:09:31.769 19:11:15 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]] 00:09:31.769 19:11:15 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]] 00:09:31.769 19:11:15 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:09:31.769 19:11:15 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:31.769 19:11:15 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:31.769 19:11:15 -- common/autotest_common.sh@10 -- # set +x 00:09:31.769 ************************************ 00:09:31.769 START TEST nvme_fdp 00:09:31.769 ************************************ 00:09:31.769 19:11:15 nvme_fdp -- common/autotest_common.sh@1129 -- # test/nvme/nvme_fdp.sh 00:09:31.769 * Looking for test storage... 00:09:31.769 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:31.769 19:11:16 nvme_fdp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:31.769 19:11:16 nvme_fdp -- common/autotest_common.sh@1711 -- # lcov --version 00:09:31.769 19:11:16 nvme_fdp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:31.769 19:11:16 nvme_fdp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:31.769 19:11:16 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:31.769 19:11:16 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:31.769 19:11:16 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:31.769 19:11:16 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-: 00:09:31.769 19:11:16 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1 00:09:31.769 19:11:16 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-: 00:09:31.769 19:11:16 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2 00:09:31.769 19:11:16 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<' 00:09:31.769 19:11:16 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2 00:09:31.769 19:11:16 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1 00:09:31.769 19:11:16 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:31.769 19:11:16 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in 00:09:31.769 19:11:16 nvme_fdp -- scripts/common.sh@345 -- # : 1 00:09:31.769 19:11:16 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:31.769 19:11:16 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:31.769 19:11:16 nvme_fdp -- scripts/common.sh@365 -- # decimal 1 00:09:31.769 19:11:16 nvme_fdp -- scripts/common.sh@353 -- # local d=1 00:09:31.769 19:11:16 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:31.769 19:11:16 nvme_fdp -- scripts/common.sh@355 -- # echo 1 00:09:31.769 19:11:16 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1 00:09:31.769 19:11:16 nvme_fdp -- scripts/common.sh@366 -- # decimal 2 00:09:31.769 19:11:16 nvme_fdp -- scripts/common.sh@353 -- # local d=2 00:09:31.769 19:11:16 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:31.769 19:11:16 nvme_fdp -- scripts/common.sh@355 -- # echo 2 00:09:31.769 19:11:16 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2 00:09:31.769 19:11:16 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:31.769 19:11:16 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:31.769 19:11:16 nvme_fdp -- scripts/common.sh@368 -- # return 0 00:09:31.769 19:11:16 nvme_fdp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:31.769 19:11:16 nvme_fdp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:31.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:31.769 --rc genhtml_branch_coverage=1 00:09:31.769 --rc genhtml_function_coverage=1 00:09:31.769 --rc genhtml_legend=1 00:09:31.769 --rc geninfo_all_blocks=1 00:09:31.769 --rc geninfo_unexecuted_blocks=1 00:09:31.769 00:09:31.769 ' 00:09:31.769 19:11:16 nvme_fdp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:31.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:31.769 --rc genhtml_branch_coverage=1 00:09:31.769 --rc genhtml_function_coverage=1 00:09:31.769 --rc genhtml_legend=1 00:09:31.769 --rc geninfo_all_blocks=1 00:09:31.769 --rc geninfo_unexecuted_blocks=1 00:09:31.769 00:09:31.769 ' 00:09:31.769 19:11:16 nvme_fdp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:31.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:31.769 --rc genhtml_branch_coverage=1 00:09:31.769 --rc genhtml_function_coverage=1 00:09:31.769 --rc genhtml_legend=1 00:09:31.769 --rc geninfo_all_blocks=1 00:09:31.769 --rc geninfo_unexecuted_blocks=1 00:09:31.769 00:09:31.769 ' 00:09:31.769 19:11:16 nvme_fdp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:31.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:31.769 --rc genhtml_branch_coverage=1 00:09:31.769 --rc genhtml_function_coverage=1 00:09:31.769 --rc genhtml_legend=1 00:09:31.769 --rc geninfo_all_blocks=1 00:09:31.769 --rc geninfo_unexecuted_blocks=1 00:09:31.769 00:09:31.769 ' 00:09:31.769 19:11:16 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:31.769 19:11:16 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:31.769 19:11:16 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:09:31.769 19:11:16 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:09:31.769 19:11:16 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:31.769 19:11:16 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob 00:09:31.769 19:11:16 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:31.769 19:11:16 nvme_fdp -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:31.769 19:11:16 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:31.769 19:11:16 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:31.769 19:11:16 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:31.769 19:11:16 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:31.769 19:11:16 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:09:31.769 19:11:16 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:31.769 19:11:16 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:09:31.769 19:11:16 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:09:31.769 19:11:16 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:09:31.769 19:11:16 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:09:31.769 19:11:16 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:09:31.769 19:11:16 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:09:31.769 19:11:16 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:09:31.769 19:11:16 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:09:31.769 19:11:16 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:09:31.769 19:11:16 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:31.769 19:11:16 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:32.340 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:32.340 Waiting for block devices as requested 00:09:32.340 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:32.340 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:32.600 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:32.600 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:37.891 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:37.891 19:11:21 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:09:37.891 19:11:21 nvme_fdp 
-- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:09:37.891 19:11:21 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:37.892 19:11:21 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:09:37.892 19:11:21 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:09:37.892 19:11:21 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:09:37.892 19:11:21 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:37.892 19:11:21 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:37.892 19:11:21 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:37.892 19:11:21 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:37.892 19:11:21 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:09:37.892 19:11:21 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:09:37.892 19:11:21 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:09:37.892 19:11:21 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:37.892 19:11:21 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:09:37.892 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.892 19:11:21 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:09:37.892 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.892 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:37.892 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.892 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.892 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:37.892 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:09:37.892 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:09:37.892 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.892 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.892 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:37.892 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:09:37.892 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:09:37.892 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.892 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.892 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:09:37.892 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:09:37.892 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:09:37.892 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.892 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.892 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:37.892 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:09:37.892 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:09:37.892 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.892 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.892 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:37.892 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:09:37.892 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:09:37.892 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.892 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.892 19:11:21 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:37.892 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:09:37.892 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:09:37.892 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.892 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.892 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:37.892 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:09:37.892 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:09:37.892 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.892 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.892 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.892 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:09:37.892 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:09:37.892 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.892 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.892 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:37.892 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:09:37.892 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:09:37.892 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.892 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.892 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.892 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:09:37.892 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:09:37.892 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.892 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.892 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:37.892 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:09:37.892 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:09:37.892 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.892 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.892 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.892 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:09:37.892 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:09:37.892 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.892 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.892 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.892 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:09:37.892 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:09:37.892 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.892 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.892 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:37.892 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:09:37.892 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:09:37.892 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.892 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.892 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:37.892 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:09:37.892 19:11:21 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:09:37.892 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.892 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.892 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.892 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:09:37.892 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:09:37.892 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.892 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.892 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:37.892 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:09:37.892 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:09:37.892 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.892 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.892 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:37.892 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:37.892 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:09:37.892 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.892 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.892 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.892 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:09:37.892 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:09:37.892 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.892 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.892 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.892 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:09:37.892 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:09:37.892 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.892 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.892 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.892 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:09:37.892 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:09:37.892 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.892 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.892 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.892 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:09:37.892 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:09:37.892 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.892 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.892 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.892 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:09:37.892 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:09:37.892 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.892 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.892 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.892 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:09:37.892 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:09:37.892 19:11:21 nvme_fdp -- nvme/functions.sh@21 
-- # IFS=: 00:09:37.892 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.892 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:37.892 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:09:37.892 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:09:37.892 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.892 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.892 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:37.892 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:09:37.892 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:09:37.892 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.892 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.892 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:37.892 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:09:37.892 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:09:37.892 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.892 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.892 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:37.892 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:09:37.893 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:09:37.893 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.893 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.893 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:37.893 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:09:37.893 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:09:37.893 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.893 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.893 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.893 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:09:37.893 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:09:37.893 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.893 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.893 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.893 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:09:37.893 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:09:37.893 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.893 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.893 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.893 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:09:37.893 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:09:37.893 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.893 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.893 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.893 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:09:37.893 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:09:37.893 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.893 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.893 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:37.893 19:11:21 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:09:37.893 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:09:37.893 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.893 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.893 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:37.893 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:09:37.893 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:09:37.893 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.893 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.893 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.893 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:09:37.893 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:09:37.893 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.893 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.893 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.893 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:09:37.893 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:09:37.893 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.893 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.893 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.893 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:09:37.893 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:09:37.893 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.893 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.893 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.893 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:09:37.893 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:09:37.893 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.893 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.893 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.893 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:09:37.893 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:09:37.893 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.893 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.893 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.893 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:09:37.893 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:09:37.893 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.893 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.893 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.893 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:09:37.893 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:09:37.893 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.893 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.893 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.893 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:09:37.893 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:09:37.893 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:09:37.893 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.893 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.893 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:09:37.893 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:09:37.893 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.893 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.893 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.893 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:09:37.893 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:09:37.893 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.893 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.893 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.893 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:09:37.893 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:09:37.893 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.893 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.893 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.893 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:09:37.893 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:09:37.893 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.893 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.893 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.893 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:09:37.893 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:09:37.893 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.893 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.893 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.893 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:09:37.893 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:09:37.893 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.893 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.893 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.893 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:09:37.893 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:09:37.893 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.893 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.893 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.893 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:09:37.893 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:09:37.893 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.893 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.893 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.893 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:09:37.893 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:09:37.893 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.893 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.893 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.893 19:11:21 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:09:37.893 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:09:37.893 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.893 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.893 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.893 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:09:37.893 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:09:37.893 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.893 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.893 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.893 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:09:37.893 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:09:37.893 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.893 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.893 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.893 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:09:37.893 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:09:37.893 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.893 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.893 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.893 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:09:37.893 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:09:37.893 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.893 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.893 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.893 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:09:37.893 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:09:37.893 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.893 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.893 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.893 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:09:37.893 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:09:37.893 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.893 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.893 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.893 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:09:37.893 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:09:37.893 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.893 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.894 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:37.894 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:09:37.894 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:09:37.894 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.894 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.894 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:37.894 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:09:37.894 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:09:37.894 19:11:21 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:37.894 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.894 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.894 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:09:37.894 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:09:37.894 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.894 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.894 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:37.894 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:09:37.894 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:09:37.894 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.894 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.894 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:37.894 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:09:37.894 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:09:37.894 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.894 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.894 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.894 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:09:37.894 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:09:37.894 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.894 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.894 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.894 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:09:37.894 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:09:37.894 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.894 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.894 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:37.894 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:09:37.894 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:09:37.894 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.894 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.894 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.894 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:09:37.894 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:09:37.894 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.894 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.894 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.894 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:09:37.894 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:09:37.894 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.894 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.894 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.894 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:09:37.894 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:09:37.894 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.894 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.894 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.894 
19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:09:37.894 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:09:37.894 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.894 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.894 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.894 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:09:37.894 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:09:37.894 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.894 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.894 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:37.894 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:09:37.894 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:09:37.894 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.894 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.894 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:37.894 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:09:37.894 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:09:37.894 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.894 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.894 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.894 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:09:37.894 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:09:37.894 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.894 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.894 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.894 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:09:37.894 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:09:37.894 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.894 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.894 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.894 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:09:37.894 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:09:37.894 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.894 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.894 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:09:37.894 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:09:37.894 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:09:37.894 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.894 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.894 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.894 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:09:37.894 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:09:37.894 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.894 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.894 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.894 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:09:37.894 19:11:21 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:09:37.894 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.894 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.894 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.894 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:09:37.894 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:09:37.894 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.894 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.894 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.894 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:09:37.894 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:09:37.894 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.894 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.894 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.894 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:09:37.894 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:09:37.894 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.894 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.894 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.894 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:09:37.894 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:09:37.894 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.894 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.894 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:37.894 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:37.894 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:37.894 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.894 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.894 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:37.894 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:37.894 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:37.894 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.894 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.894 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:37.894 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:09:37.894 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:09:37.894 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.894 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.894 19:11:21 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:09:37.894 19:11:21 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:37.894 19:11:21 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:09:37.894 19:11:21 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:09:37.894 19:11:21 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:09:37.894 19:11:21 
nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:09:37.894 19:11:21 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:37.894 19:11:21 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:09:37.894 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.894 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.894 19:11:21 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:09:37.894 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:37.894 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.894 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.894 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:37.894 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:09:37.894 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:09:37.894 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.894 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.895 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:37.895 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:09:37.895 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:09:37.895 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.895 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.895 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:37.895 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:09:37.895 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:09:37.895 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.895 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.895 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:37.895 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 00:09:37.895 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 00:09:37.895 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.895 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.895 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:37.895 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 00:09:37.895 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 00:09:37.895 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.895 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.895 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:37.895 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 00:09:37.895 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 00:09:37.895 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.895 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.895 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:37.895 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:09:37.895 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:09:37.895 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.895 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.895 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:37.895 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:09:37.895 19:11:21 nvme_fdp -- 
nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:09:37.895 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.895 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.895 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.895 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:09:37.895 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:09:37.895 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.895 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.895 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.895 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:09:37.895 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:09:37.895 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.895 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.895 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.895 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:09:37.895 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:09:37.895 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.895 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.895 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.895 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:09:37.895 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:09:37.895 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.895 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.895 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:37.895 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:09:37.895 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 00:09:37.895 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.895 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.895 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.895 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:09:37.895 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:09:37.895 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.895 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.895 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.895 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:09:37.895 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:09:37.895 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.895 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.895 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.895 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:09:37.895 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:09:37.895 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.895 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.895 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.895 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:09:37.895 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:09:37.895 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.895 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.895 19:11:21 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.895 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:09:37.895 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:09:37.895 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.895 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.895 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.895 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:09:37.895 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:09:37.895 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.895 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.895 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.895 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:09:37.895 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:09:37.895 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.895 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.895 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.895 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"' 00:09:37.895 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0 00:09:37.895 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.895 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.895 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.895 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"' 00:09:37.895 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 00:09:37.895 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.895 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.895 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.895 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"' 00:09:37.895 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwa]=0 00:09:37.895 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.895 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.895 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.895 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"' 00:09:37.895 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npdg]=0 00:09:37.895 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.895 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.895 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.895 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"' 00:09:37.895 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npda]=0 00:09:37.895 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.895 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.895 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.895 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"' 00:09:37.895 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nows]=0 00:09:37.895 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.895 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.895 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:37.895 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"' 00:09:37.895 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128 
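The trace above is nvme_get() filling a bash associative array (here ng0n1) from nvme-cli's plain-text "field : value" output: functions.sh@21 splits each line on ':', @22 skips empty values, and @23 eval's the assignment. A minimal sketch of that pattern, assuming nvme-cli's default id-ns text format (the helper name nvme_get_sketch is hypothetical, not the verbatim function):

    # Read "reg : val" pairs from an nvme-cli command and stash them in a
    # named associative array, mirroring the eval pattern in this trace.
    nvme_get_sketch() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                     # as at functions.sh@20
        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue           # as at functions.sh@22
            reg=${reg//[[:space:]]/}            # strip padding around the key
            val=${val#"${val%%[![:space:]]*}"}  # left-trim the value
            eval "${ref}[\$reg]=\$val"          # e.g. ng0n1[nsze]=0x140000
        done < <("$@")
    }
    # usage: nvme_get_sketch ng0n1 /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1
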
00:09:37.895 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.895 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.895 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:37.895 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 00:09:37.895 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 00:09:37.895 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.895 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.895 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:37.895 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 00:09:37.895 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 00:09:37.895 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.895 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.895 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.895 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:09:37.895 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:09:37.895 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.895 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.895 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.895 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:09:37.895 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:09:37.895 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.895 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.895 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.895 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:09:37.895 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:09:37.895 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.895 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.895 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.895 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:09:37.895 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:09:37.895 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.895 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.895 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.896 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:09:37.896 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:09:37.896 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.896 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.896 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:37.896 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"' 00:09:37.896 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 00:09:37.896 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.896 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.896 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:37.896 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"' 00:09:37.896 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000 00:09:37.896 19:11:21 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:37.896 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.896 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:37.896 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:37.896 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:37.896 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.896 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.896 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:37.896 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:37.896 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:37.896 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.896 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.896 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:37.896 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:37.896 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:37.896 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.896 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.896 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:37.896 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:37.896 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:37.896 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.896 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.896 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:37.896 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:37.896 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:37.896 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.896 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.896 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:37.896 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:37.896 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:37.896 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.896 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.896 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:37.896 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:37.896 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:37.896 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.896 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.896 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:37.896 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:37.896 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:37.896 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.896 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
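The lbaf0-lbaf7 strings captured above encode each LBA format's metadata size (ms), data size as a power of two (lbads), and relative performance (rp); flbas=0x4 selects lbaf4, whose "lbads:12" means 4096-byte blocks. A hedged sketch of how a test could derive the in-use block size from these arrays (get_block_size is an illustrative helper, not part of functions.sh):

    # Low nibble of FLBAS indexes the LBA format; block size is 2^lbads.
    get_block_size() {
        local -n ns=$1                       # e.g. ng0n1
        local fmt=$(( ${ns[flbas]} & 0xf ))  # 0x4 -> lbaf4
        local lbads=${ns[lbaf$fmt]#*lbads:}  # "ms:0 lbads:12 rp:0 (in use)"
        lbads=${lbads%% *}
        echo $(( 1 << lbads ))               # 2^12 = 4096 bytes
    }
    # usage: get_block_size ng0n1   -> 4096
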
00:09:37.896 19:11:21 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1 00:09:37.896 19:11:21 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:37.896 19:11:21 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:09:37.896 19:11:21 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:09:37.896 19:11:21 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:09:37.896 19:11:21 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:09:37.896 19:11:21 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:37.896 19:11:21 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:09:37.896 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.896 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.896 19:11:21 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:09:37.896 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:37.896 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.896 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.896 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:37.896 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:09:37.896 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:09:37.896 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.896 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.896 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:37.896 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:09:37.896 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:09:37.896 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.896 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.896 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:37.896 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:09:37.896 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:09:37.896 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.896 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.896 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:37.896 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:09:37.896 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:09:37.896 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.896 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.896 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:37.896 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:09:37.896 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:09:37.896 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.896 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.896 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:37.896 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:09:37.896 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:09:37.896 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.896 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.896 19:11:21 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:37.896 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:09:37.896 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:09:37.896 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.896 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.896 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:37.896 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:09:37.896 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:09:37.896 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.896 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.896 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.896 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:09:37.896 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:09:37.896 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.896 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.896 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.896 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:09:37.896 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:09:37.896 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.896 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.897 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.897 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:09:37.897 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:09:37.897 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.897 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.897 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.897 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:09:37.897 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:09:37.897 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.897 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.897 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:37.897 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:09:37.897 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:09:37.897 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.897 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.897 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.897 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:09:37.897 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:09:37.897 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.897 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.897 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.897 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:09:37.897 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:09:37.897 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.897 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.897 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.897 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:09:37.897 19:11:21 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:09:37.897 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.897 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.897 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.897 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:09:37.897 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:09:37.897 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.897 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.897 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.897 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:09:37.897 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:09:37.897 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.897 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.897 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.897 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:09:37.897 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:09:37.897 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.897 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.897 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.897 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:09:37.897 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:09:37.897 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.897 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.897 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.897 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:09:37.897 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:09:37.897 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.897 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.897 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.897 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:09:37.897 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:09:37.897 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.897 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.897 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.897 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:09:37.897 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:09:37.897 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.897 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.897 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.897 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:09:37.897 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:09:37.897 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.897 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.897 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.897 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:09:37.897 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:09:37.897 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.897 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r 
reg val 00:09:37.897 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.897 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:09:37.897 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:09:37.897 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.897 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.897 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:37.897 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:09:37.897 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:09:37.897 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.897 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.897 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:37.897 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:09:37.897 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:09:37.897 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.897 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.897 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:37.897 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:09:37.897 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:09:37.897 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.897 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.897 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.897 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:09:37.897 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:09:37.897 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.897 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.897 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.897 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:09:37.897 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:09:37.897 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.897 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.897 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.897 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:09:37.897 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:09:37.897 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.897 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.897 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.897 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:09:37.897 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:09:37.897 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.897 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.897 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.897 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:09:37.897 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:09:37.897 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.897 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.897 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:37.897 19:11:21 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:09:37.897 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:09:37.897 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.897 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.897 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:37.897 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:09:37.897 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:09:37.897 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.897 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.897 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:37.897 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:37.897 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:37.897 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.897 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.897 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:37.897 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:37.897 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:37.897 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.897 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.897 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:37.897 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:37.897 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:37.897 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.897 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.897 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:37.897 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:37.897 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:37.897 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.897 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.897 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:37.897 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:37.897 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:37.897 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.897 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.897 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:37.897 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:37.897 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:37.898 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.898 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.898 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:37.898 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 
"' 00:09:37.898 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:37.898 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.898 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.898 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:37.898 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:37.898 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:37.898 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.898 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.898 19:11:21 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:09:37.898 19:11:21 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:09:37.898 19:11:21 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:09:37.898 19:11:21 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:09:37.898 19:11:21 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:09:37.898 19:11:21 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:37.898 19:11:21 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:09:37.898 19:11:21 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:09:37.898 19:11:21 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:09:37.898 19:11:21 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:37.898 19:11:21 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:37.898 19:11:21 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:37.898 19:11:21 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:37.898 19:11:21 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:09:37.898 19:11:21 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:09:37.898 19:11:21 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:09:37.898 19:11:21 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:37.898 19:11:21 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:09:37.898 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.898 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.898 19:11:21 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:09:37.898 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:37.898 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.898 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.898 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:37.898 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:09:37.898 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:09:37.898 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.898 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.898 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:37.898 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:09:37.898 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:09:37.898 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.898 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.898 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:09:37.898 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme1[sn]="12340 "' 00:09:37.898 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:09:37.898 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.898 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.898 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:37.898 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:09:37.898 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:09:37.898 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.898 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.898 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:37.898 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:09:37.898 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:09:37.898 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.898 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.898 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:37.898 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:09:37.898 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:09:37.898 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.898 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.898 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:37.898 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:09:37.898 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:09:37.898 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.898 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.898 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.898 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:09:37.898 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:09:37.898 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.898 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.898 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:37.898 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:09:37.898 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:09:37.898 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.898 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.898 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.898 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:09:37.898 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:09:37.898 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.898 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.898 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:37.898 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:09:37.898 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:09:37.898 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.898 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.898 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.898 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:09:37.898 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:09:37.898 19:11:21 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:37.898 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.898 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.898 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:09:37.898 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:09:37.898 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.898 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.898 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:37.898 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:09:37.898 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:09:37.898 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.898 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.898 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:37.898 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:09:37.898 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:09:37.898 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.898 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.898 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.898 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:09:37.898 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:09:37.898 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.898 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.898 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:37.898 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:09:37.898 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:09:37.898 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.898 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.898 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:37.898 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:37.898 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:09:37.898 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.898 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.898 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.898 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:09:37.898 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:09:37.898 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.898 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.898 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.898 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:09:37.898 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:09:37.898 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.898 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.898 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.898 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:09:37.898 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:09:37.898 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.898 19:11:21 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:37.898 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.898 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:09:37.898 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:09:37.898 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.898 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.898 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.898 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:09:37.898 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:09:37.898 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.898 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.898 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.898 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:09:37.898 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:09:37.898 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.898 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.899 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:37.899 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:09:37.899 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:09:37.899 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.899 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.899 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:37.899 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:09:37.899 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:09:37.899 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.899 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.899 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:37.899 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:09:37.899 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:09:37.899 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.899 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.899 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:37.899 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:09:37.899 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:09:37.899 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.899 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.899 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:37.899 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:09:37.899 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:09:37.899 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.899 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.899 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.899 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:09:37.899 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:09:37.899 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.899 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.899 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.899 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 
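Between the two controller dumps, functions.sh@47-@63 (visible a few records back) shows the outer discovery loop: each /sys/class/nvme/nvme* entry is filtered through pci_can_use, parsed with nvme_get, its char (ngXnY) and block (nvmeXnY) namespace nodes enumerated via an extglob, and the results recorded in the ctrls/nvmes/bdfs/ordered_ctrls maps. A paraphrased sketch of that loop under those assumptions (bodies condensed, not copied verbatim from functions.sh):

    shopt -s extglob nullglob
    declare -A ctrls nvmes bdfs
    declare -a ordered_ctrls
    for ctrl in /sys/class/nvme/nvme*; do
        pci=$(< "$ctrl/address")             # e.g. 0000:00:10.0
        pci_can_use "$pci" || continue       # block/allow-list filter, as at @50
        ctrl_dev=${ctrl##*/}                 # nvme1
        nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"
        # char (ngXnY) and block (nvmeXnY) namespace nodes, as at @54:
        for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
            [[ -e $ns ]] || continue
            nvme_get "${ns##*/}" id-ns "/dev/${ns##*/}"
        done
        ctrls["$ctrl_dev"]=$ctrl_dev         # registration, as at @60-@63
        nvmes["$ctrl_dev"]=${ctrl_dev}_ns
        bdfs["$ctrl_dev"]=$pci
        ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev
    done
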
00:09:37.899 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:09:37.899 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.899 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.899 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.899 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:09:37.899 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:09:37.899 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.899 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.899 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.899 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:09:37.899 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:09:37.899 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.899 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.899 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:37.899 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:09:37.899 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:09:37.899 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.899 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.899 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:37.899 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:09:37.899 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:09:37.899 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.899 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.899 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.899 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:09:37.899 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:09:37.899 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.899 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.899 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.899 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:09:37.899 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:09:37.899 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.899 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.899 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.899 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:09:37.899 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:09:37.899 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.899 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.899 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.899 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:09:37.899 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:09:37.899 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.899 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.899 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.899 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:09:37.899 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:09:37.899 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.899 19:11:21 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:37.899 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.899 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:09:37.899 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:09:37.899 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.899 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.899 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.899 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:09:37.899 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:09:37.899 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.899 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.899 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.899 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:09:37.899 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:09:37.899 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.899 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.899 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.899 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:09:37.899 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:09:37.899 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.899 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.899 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.899 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:09:37.899 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:09:37.899 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.899 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.899 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.899 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:09:37.899 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:09:37.899 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.899 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.899 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.899 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:09:37.899 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:09:37.899 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.899 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.899 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.899 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:09:37.899 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:09:37.899 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.899 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.899 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.899 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:09:37.899 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:09:37.899 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.899 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.899 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.899 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 
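Once populated, these arrays let tests gate on capability bits without re-running nvme-cli. For instance, the oncs=0x15d reported by both controllers has bit 8 set, which per the NVMe base spec indicates Copy command support and is consistent with the non-zero mssrl/mcl/msrc namespace fields above. A small illustrative check (supports_copy is hypothetical; the bit position is my reading of the spec, not something this log states):

    # True if the controller's ONCS advertises the Copy command (bit 8).
    supports_copy() {
        local -n c=$1         # e.g. nvme1
        (( ${c[oncs]} & (1 << 8) ))
    }
    # usage: supports_copy nvme1 && echo "nvme1 implements Copy"
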
00:09:37.899 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:09:37.899 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.899 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.899 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.899 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:09:37.899 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:09:37.899 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.899 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.899 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.899 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:09:37.899 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:09:37.899 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.899 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.899 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.899 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:09:37.899 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:09:37.899 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.899 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.899 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.899 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:09:37.899 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:09:37.899 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.899 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.899 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.899 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:09:37.899 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:09:37.899 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.899 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.899 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.899 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:09:37.899 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:09:37.899 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.899 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.899 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.899 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:09:37.899 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:09:37.899 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.900 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.900 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.900 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:09:37.900 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:09:37.900 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.900 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.900 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.900 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:09:37.900 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:09:37.900 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.900 19:11:21 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.900 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.900 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:09:37.900 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:09:37.900 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.900 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.900 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:37.900 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:09:37.900 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:09:37.900 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.900 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.900 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:37.900 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:09:37.900 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:09:37.900 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.900 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.900 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.900 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:09:37.900 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:09:37.900 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.900 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.900 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:37.900 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:09:37.900 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:09:37.900 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.900 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.900 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:37.900 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:09:37.900 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:09:37.900 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.900 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.900 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.900 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:09:37.900 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:09:37.900 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.900 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.900 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.900 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:09:37.900 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:09:37.900 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.900 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.900 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:37.900 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:09:37.900 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:09:37.900 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.900 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.900 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.900 19:11:21 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme1[awun]="0"' 00:09:37.900 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:09:37.900 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.900 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.900 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.900 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:09:37.900 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:09:37.900 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.900 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.900 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.900 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:09:37.900 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:09:37.900 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.900 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.900 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.900 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:09:37.900 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:09:37.900 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.900 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.900 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.900 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:09:37.900 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:09:37.900 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.900 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.900 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:37.900 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:09:37.900 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:09:37.900 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.900 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.900 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:37.900 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:09:37.900 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:09:37.900 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.900 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.900 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.900 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:09:37.900 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:09:37.900 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.900 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.900 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.900 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:09:37.900 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:09:37.900 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.900 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.900 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.900 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:09:37.900 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:09:37.900 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.900 19:11:21 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.900 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:09:37.900 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:09:37.900 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:09:37.900 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.900 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.900 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.900 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:09:37.900 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:09:37.900 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.900 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.900 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.900 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:09:37.900 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:09:37.900 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.900 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.900 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.900 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:09:37.900 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:09:37.900 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.900 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.900 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.900 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:09:37.900 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:09:37.900 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.900 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.900 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.900 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:09:37.900 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:09:37.900 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.900 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.900 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.900 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:09:37.900 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:09:37.900 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.900 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.900 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:37.900 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:37.900 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:37.900 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.900 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.900 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:37.900 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:37.900 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 
rwl:0 idle_power:- active_power:-' 00:09:37.900 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.900 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.900 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:37.900 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:09:37.900 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:09:37.900 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.900 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.900 19:11:21 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:09:37.900 19:11:21 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:37.901 19:11:21 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:09:37.901 19:11:21 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:09:37.901 19:11:21 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:09:37.901 19:11:21 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:09:37.901 19:11:21 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:37.901 19:11:21 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:09:37.901 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.901 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.901 19:11:21 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:09:37.901 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:37.901 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.901 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.901 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:37.901 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 00:09:37.901 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 00:09:37.901 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.901 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.901 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:37.901 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 00:09:37.901 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 00:09:37.901 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.901 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.901 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:37.901 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 00:09:37.901 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 00:09:37.901 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.901 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.901 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:37.901 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 00:09:37.901 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 00:09:37.901 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.901 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.901 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:37.901 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 00:09:37.901 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 00:09:37.901 19:11:21 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.901 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.901 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:37.901 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"' 00:09:37.901 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7 00:09:37.901 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.901 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.901 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:37.901 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"' 00:09:37.901 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3 00:09:37.901 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.901 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.901 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:37.901 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"' 00:09:37.901 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f 00:09:37.901 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.901 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.901 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.901 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"' 00:09:37.901 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dps]=0 00:09:37.901 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.901 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.901 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.901 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"' 00:09:37.901 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nmic]=0 00:09:37.901 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.901 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.901 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.901 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[rescap]="0"' 00:09:37.901 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[rescap]=0 00:09:37.901 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.901 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.901 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.901 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[fpi]="0"' 00:09:37.901 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[fpi]=0 00:09:37.901 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.901 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.901 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:37.901 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dlfeat]="1"' 00:09:37.901 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1 00:09:37.901 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.901 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.901 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.901 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nawun]="0"' 00:09:37.901 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawun]=0 00:09:37.901 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.901 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.901 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
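Note: functions.sh@54-57, visible a few records back, enumerate both namespace nodes of the controller with one extglob pattern before re-running nvme_get with id-ns. An illustrative expansion of that glob (assumes extglob is enabled, as the trace implies):

    shopt -s extglob
    ctrl=/sys/class/nvme/nvme1
    # ${ctrl##*nvme} -> "1" and ${ctrl##*/} -> "nvme1", so the pattern
    # expands to @(ng1|nvme1n)* and matches ng1n1 (generic char node)
    # as well as nvme1n1 (block node).
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
        [[ -e $ns ]] || continue
        echo "namespace node: ${ns##*/}"
    done

Both nodes are parsed with id-ns, which is why the ng1n1 fields above are repeated for nvme1n1 below.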
00:09:37.901 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nawupf]="0"' 00:09:37.901 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0 00:09:37.901 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.901 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.901 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.901 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nacwu]="0"' 00:09:37.901 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0 00:09:37.901 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.901 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.901 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.901 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabsn]="0"' 00:09:37.901 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0 00:09:37.901 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.901 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.901 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.901 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabo]="0"' 00:09:37.901 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabo]=0 00:09:37.901 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.901 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.901 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.901 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabspf]="0"' 00:09:37.901 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0 00:09:37.901 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.901 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.901 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.901 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[noiob]="0"' 00:09:37.901 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[noiob]=0 00:09:37.901 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.901 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.901 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.901 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmcap]="0"' 00:09:37.901 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0 00:09:37.901 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.901 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.901 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.901 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npwg]="0"' 00:09:37.901 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwg]=0 00:09:37.901 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.901 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.901 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.901 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npwa]="0"' 00:09:37.901 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwa]=0 00:09:37.901 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.901 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.901 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.901 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npdg]="0"' 00:09:37.901 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npdg]=0 00:09:37.901 19:11:21 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:37.901 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.901 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.901 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npda]="0"' 00:09:37.901 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npda]=0 00:09:37.901 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.901 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.901 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.901 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nows]="0"' 00:09:37.901 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nows]=0 00:09:37.901 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.901 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.901 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:37.902 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mssrl]="128"' 00:09:37.902 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128 00:09:37.902 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.902 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.902 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:37.902 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mcl]="128"' 00:09:37.902 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mcl]=128 00:09:37.902 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.902 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.902 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:37.902 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[msrc]="127"' 00:09:37.902 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[msrc]=127 00:09:37.902 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.902 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.902 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.902 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nulbaf]="0"' 00:09:37.902 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0 00:09:37.902 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.902 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.902 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.902 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[anagrpid]="0"' 00:09:37.902 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0 00:09:37.902 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.902 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.902 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.902 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsattr]="0"' 00:09:37.902 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0 00:09:37.902 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.902 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.902 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.902 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmsetid]="0"' 00:09:37.902 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0 00:09:37.902 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.902 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.902 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:09:37.902 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"' 00:09:37.902 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[endgid]=0 00:09:37.902 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.902 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.902 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:37.902 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"' 00:09:37.902 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000 00:09:37.902 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.902 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.902 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:37.902 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"' 00:09:37.902 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000 00:09:37.902 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.902 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.902 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:37.902 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:37.902 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:37.902 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.902 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.902 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:37.902 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:37.902 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:37.902 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.902 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.902 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:37.902 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:37.902 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:37.902 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.902 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.902 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:37.902 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:37.902 19:11:21 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:37.902 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.902 19:11:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.902 19:11:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:37.902 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:37.902 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:37.902 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.902 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.902 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:37.902 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:37.902 19:11:22 nvme_fdp -- 
nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:37.902 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.902 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.902 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:37.902 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:37.902 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:37.902 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.902 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.902 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:37.902 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:37.902 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:37.902 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.902 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.902 19:11:22 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1 00:09:37.902 19:11:22 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:37.902 19:11:22 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:09:37.902 19:11:22 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:09:37.902 19:11:22 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:09:37.902 19:11:22 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:09:37.902 19:11:22 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:37.902 19:11:22 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:09:37.902 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.902 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.902 19:11:22 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:09:37.902 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:37.902 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.902 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.902 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:37.902 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:09:37.902 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:09:37.902 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.902 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.902 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:37.902 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:09:37.902 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:09:37.902 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.902 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.902 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:37.902 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:09:37.902 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:09:37.902 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.902 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.902 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:37.902 19:11:22 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:09:37.902 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:09:37.902 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.902 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.902 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:37.902 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:09:37.902 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:09:37.902 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.902 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.902 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:37.902 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:09:37.902 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:09:37.902 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.902 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.902 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:37.902 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:09:37.902 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:09:37.902 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.902 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.902 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:37.902 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:09:37.902 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:09:37.902 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.902 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.902 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.902 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:09:37.902 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:09:37.902 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.902 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.902 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.902 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:09:37.902 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:09:37.902 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.902 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.902 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.903 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:09:37.903 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:09:37.903 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.903 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.903 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.903 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:09:37.903 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:09:37.903 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.903 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.903 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:37.903 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:09:37.903 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:09:37.903 19:11:22 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.903 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.903 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.903 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:09:37.903 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:09:37.903 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.903 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.903 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.903 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:09:37.903 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:09:37.903 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.903 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.903 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.903 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:09:37.903 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:09:37.903 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.903 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.903 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.903 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:09:37.903 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:09:37.903 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.903 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.903 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.903 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:09:37.903 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:09:37.903 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.903 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.903 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.903 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:09:37.903 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:09:37.903 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.903 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.903 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.903 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:09:37.903 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:09:37.903 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.903 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.903 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.903 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:09:37.903 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:09:37.903 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.903 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.903 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.903 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:09:37.903 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:09:37.903 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.903 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.903 19:11:22 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.903 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:09:37.903 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:09:37.903 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.903 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.903 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.903 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:09:37.903 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:09:37.903 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.903 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.903 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.903 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:09:37.903 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:09:37.903 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.903 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.903 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.903 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:09:37.903 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:09:37.903 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.903 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.903 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:37.903 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:09:37.903 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:09:37.903 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.903 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.903 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:37.903 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:09:37.903 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:09:37.903 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.903 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.903 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:37.903 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:09:37.903 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:09:37.903 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.903 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.903 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.903 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:09:37.903 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:09:37.903 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.903 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.903 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.903 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:09:37.903 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:09:37.903 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.903 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.903 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.903 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:09:37.903 19:11:22 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:09:37.903 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.903 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.903 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.903 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:09:37.903 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:09:37.903 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.903 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.903 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.903 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:09:37.903 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:09:37.903 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.903 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.903 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:37.903 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:09:37.903 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:09:37.903 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.903 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.903 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:37.903 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:09:37.903 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:09:37.903 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.903 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.903 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:37.903 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:37.903 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:37.903 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.903 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.903 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:37.903 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:37.903 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:37.903 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.903 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.903 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:37.903 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:37.903 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:37.903 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.903 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.903 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:37.903 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:37.903 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:37.903 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.903 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
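Note: for this namespace flbas=0x7, and lbaf7 is the entry tagged '(in use)' just below: ms:64 lbads:12, i.e. 4096-byte data blocks with 64 bytes of metadata. A small worked decode of those fields (illustrative, using the values parsed in this trace; assumes FLBAS bits 3:0 select the format, which holds here since nlbaf is only 7):

    flbas=0x7
    fmt=$((flbas & 0xf))                        # active format index: 7
    lbaf='ms:64 lbads:12 rp:0 (in use)'         # nvme1n1[lbaf7]
    lbads=${lbaf#*lbads:}; lbads=${lbads%% *}   # -> 12
    ms=${lbaf#ms:};        ms=${ms%% *}         # -> 64
    echo "lbaf$fmt: $((1 << lbads))-byte blocks + $ms metadata bytes"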
00:09:37.903 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:37.903 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:37.903 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:37.903 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.903 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.903 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:37.903 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:37.903 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:37.903 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.903 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.904 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:37.904 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:37.904 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:37.904 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.904 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.904 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:37.904 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:37.904 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:37.904 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.904 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.904 19:11:22 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:09:37.904 19:11:22 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:09:37.904 19:11:22 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:09:37.904 19:11:22 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:09:37.904 19:11:22 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:09:37.904 19:11:22 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:37.904 19:11:22 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:09:37.904 19:11:22 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:09:37.904 19:11:22 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:09:37.904 19:11:22 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:37.904 19:11:22 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:37.904 19:11:22 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:37.904 19:11:22 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:37.904 19:11:22 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:09:37.904 19:11:22 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:09:37.904 19:11:22 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:09:37.904 19:11:22 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:37.904 19:11:22 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:09:37.904 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.904 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.904 19:11:22 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:09:37.904 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # 
[[ -n '' ]] 00:09:37.904 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.904 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.904 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:37.904 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:09:37.904 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:09:37.904 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.904 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.904 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:37.904 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:09:37.904 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:09:37.904 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.904 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.904 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:09:37.904 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:09:37.904 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:09:37.904 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.904 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.904 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:37.904 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:09:37.904 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:09:37.904 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.904 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.904 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:37.904 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:09:37.904 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:09:37.904 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.904 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.904 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:37.904 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:09:37.904 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:09:37.904 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.904 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.904 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:37.904 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:09:37.904 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:09:37.904 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.904 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.904 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.904 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:09:37.904 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:09:37.904 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.904 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.904 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:37.904 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:09:37.904 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:09:37.904 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.904 19:11:22 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:37.904 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.904 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:09:37.904 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:09:37.904 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.904 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.904 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:37.904 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:09:37.904 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:09:37.904 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.904 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.904 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.904 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:09:37.904 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:09:37.904 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.904 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.904 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.904 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:09:37.904 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:09:37.904 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.904 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.904 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:37.904 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:09:37.904 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:09:37.904 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.904 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.904 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:37.904 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:09:37.904 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:09:37.904 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.904 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.904 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.904 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:09:37.904 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:09:37.904 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.904 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.904 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:37.904 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:09:37.904 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:09:37.904 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.904 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.904 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:37.904 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:37.904 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:09:37.904 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.904 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
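Note: the functions.sh@58-63 records a little earlier registered nvme1 before the loop moved on to nvme2, and the empty left-hand side in `[[ =~ 0000:00:12.0 ]]` indicates PCI_ALLOWED is unset, so pci_can_use accepts every controller. A sketch of the registry state those lines leave behind (illustrative; the real arrays live in the test's shell session):

    declare -A ctrls nvmes bdfs
    declare -a ordered_ctrls
    ctrls[nvme1]=nvme1           # name of the controller's id-ctrl map
    nvmes[nvme1]=nvme1_ns        # its namespace map; nvme1_ns[1]=nvme1n1
    bdfs[nvme1]=0000:00:10.0     # PCI address the controller is bound to
    ordered_ctrls[1]=nvme1       # index from ${ctrl_dev/nvme/}

ng1n1 and nvme1n1 share namespace id 1 (${ns##*n} is "1" for both), so the block node overwrites the generic node in nvme1_ns.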
00:09:37.904 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.904 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:09:37.904 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:09:37.904 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.904 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.904 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.904 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:09:37.904 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:09:37.904 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.904 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.904 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.904 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:09:37.904 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:09:37.904 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.904 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.904 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.904 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:09:37.904 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:09:37.904 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.904 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.904 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.904 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:09:37.904 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:09:37.904 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.905 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.905 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.905 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:09:37.905 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:09:37.905 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.905 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.905 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:37.905 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:09:37.905 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:09:37.905 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.905 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.905 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:37.905 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:09:37.905 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:09:37.905 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.905 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.905 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:37.905 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:09:37.905 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:09:37.905 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.905 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.905 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:37.905 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:09:37.905 19:11:22 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:09:37.905 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.905 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.905 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:37.905 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:09:37.905 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:09:37.905 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.905 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.905 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.905 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:09:37.905 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:09:37.905 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.905 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.905 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.905 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:09:37.905 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:09:37.905 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.905 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.905 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.905 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:09:37.905 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:09:37.905 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.905 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.905 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.905 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:09:37.905 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:09:37.905 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.905 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.905 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:37.905 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:09:37.905 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:09:37.905 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.905 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.905 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:37.905 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:09:37.905 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:09:37.905 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.905 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.905 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.905 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:09:37.905 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:09:37.905 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.905 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.905 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.905 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:09:37.905 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:09:37.905 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.905 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
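The wctemp/cctemp values captured just above (343 and 373) are the warning and critical composite temperature thresholds, which the NVMe spec reports in Kelvin. A quick way to read them back in Celsius from the array the trace populates (k2c is a hypothetical helper, not part of functions.sh):

  k2c() { echo $(( $1 - 273 )); }
  k2c "${nvme2[wctemp]}"   # 343 K -> 70 C  (warning threshold)
  k2c "${nvme2[cctemp]}"   # 373 K -> 100 C (critical threshold)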
00:09:37.905 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.905 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:09:37.905 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:09:37.905 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.905 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.905 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.905 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:09:37.905 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:09:37.905 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.905 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.905 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.905 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:09:37.905 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:09:37.905 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.905 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.905 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.905 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:09:37.905 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:09:37.905 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.905 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.905 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.905 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:09:37.905 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:09:37.905 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.905 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.905 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.905 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:09:37.905 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:09:37.905 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.905 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.905 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.905 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:09:37.905 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:09:37.905 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.905 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.905 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.905 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:09:37.905 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:09:37.905 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.905 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.905 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.905 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:09:37.905 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:09:37.905 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.905 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.905 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.905 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:09:37.905 19:11:22 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:09:37.905 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.905 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.905 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.905 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:09:37.905 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:09:37.905 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.905 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.905 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.906 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:09:37.906 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:09:37.906 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.906 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.906 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.906 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:09:37.906 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:09:37.906 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.906 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.906 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.906 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:09:37.906 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:09:37.906 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.906 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.906 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.906 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:09:37.906 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:09:37.906 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.906 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.906 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.906 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:09:37.906 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:09:37.906 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.906 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.906 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.906 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:09:37.906 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:09:37.906 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.906 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.906 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.906 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:09:37.906 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:09:37.906 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.906 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.906 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.906 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:09:37.906 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:09:37.906 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.906 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:37.906 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.906 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:09:37.906 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:09:37.906 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.906 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.906 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.906 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:09:37.906 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:09:37.906 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.906 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.906 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.906 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:09:37.906 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:09:37.906 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.906 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.906 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.906 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:09:37.906 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:09:37.906 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.906 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.906 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:37.906 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:09:37.906 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:09:37.906 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.906 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.906 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:37.906 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:09:37.906 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:09:37.906 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.906 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.906 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.906 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:09:37.906 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:09:37.906 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.906 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.906 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:37.906 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:09:37.906 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:09:37.906 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.906 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.906 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:37.906 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:09:37.906 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:09:37.906 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.906 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.906 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.906 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[fuses]="0"' 00:09:37.906 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:09:37.906 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.906 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.906 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.906 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:09:37.906 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:09:37.906 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.906 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.906 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:37.906 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:09:37.906 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:09:37.906 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.906 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.906 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.906 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:09:37.906 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:09:37.906 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.906 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.906 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.906 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:09:37.906 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:09:37.906 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.906 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.906 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.906 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:09:37.906 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:09:37.906 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.906 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.906 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.906 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:09:37.906 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:09:37.906 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.906 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.906 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.906 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:09:37.906 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:09:37.906 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.906 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.906 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:37.906 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:09:37.906 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:09:37.906 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.906 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.906 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:37.906 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:09:37.906 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:09:37.906 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.906 19:11:22 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:37.906 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.906 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:09:37.906 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:09:37.906 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.906 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.906 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.906 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:09:37.906 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:09:37.906 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.906 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.906 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.906 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:09:37.906 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:09:37.906 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.906 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.906 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:09:37.906 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:09:37.906 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:09:37.906 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.906 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.906 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.906 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:09:37.906 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:09:37.906 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.906 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.906 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.906 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:09:37.906 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:09:37.907 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.907 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.907 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.907 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:09:37.907 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:09:37.907 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.907 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.907 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.907 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:09:37.907 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:09:37.907 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.907 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.907 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.907 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:09:37.907 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:09:37.907 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.907 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.907 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
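Two of the controller fields recorded above pack two sizes into one byte: sqes=0x66 and cqes=0x44 give the minimum (bits 3:0) and maximum (bits 7:4) submission/completion queue entry sizes as powers of two. A small decoder sketch, assuming the nvme2 array from the trace (decode_qes is hypothetical):

  decode_qes() {                 # decode_qes 0x66
    local v=$(( $1 ))
    echo "min=$(( 1 << (v & 0xf) ))B max=$(( 1 << (v >> 4) ))B"
  }
  decode_qes "${nvme2[sqes]}"    # 0x66 -> min=64B max=64B (SQ entries)
  decode_qes "${nvme2[cqes]}"    # 0x44 -> min=16B max=16B (CQ entries)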
00:09:37.907 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:09:37.907 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:09:37.907 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.907 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.907 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:37.907 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:37.907 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:37.907 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.907 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.907 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:37.907 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:37.907 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:37.907 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.907 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.907 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:37.907 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:09:37.907 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:09:37.907 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.907 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.907 19:11:22 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:09:37.907 19:11:22 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:37.907 19:11:22 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 00:09:37.907 19:11:22 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n1 00:09:37.907 19:11:22 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 00:09:37.907 19:11:22 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 00:09:37.907 19:11:22 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:37.907 19:11:22 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 00:09:37.907 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.907 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.907 19:11:22 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 00:09:37.907 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:37.907 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.907 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.907 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:37.907 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 00:09:37.907 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 00:09:37.907 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.907 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.907 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:37.907 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 00:09:37.907 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 00:09:37.907 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:09:37.907 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.907 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:37.907 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nuse]="0x100000"' 00:09:37.907 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000 00:09:37.907 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.907 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.907 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:37.907 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsfeat]="0x14"' 00:09:37.907 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14 00:09:37.907 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.907 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.907 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:37.907 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nlbaf]="7"' 00:09:37.907 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7 00:09:37.907 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.907 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.907 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:37.907 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[flbas]="0x4"' 00:09:37.907 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4 00:09:37.907 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.907 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.907 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:37.907 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mc]="0x3"' 00:09:37.907 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3 00:09:37.907 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.907 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.907 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:37.907 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dpc]="0x1f"' 00:09:37.907 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f 00:09:37.907 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.907 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.907 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.907 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dps]="0"' 00:09:37.907 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dps]=0 00:09:37.907 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.907 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.907 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.907 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nmic]="0"' 00:09:37.907 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nmic]=0 00:09:37.907 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.907 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.907 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.907 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[rescap]="0"' 00:09:37.907 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[rescap]=0 00:09:37.907 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.907 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.907 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.907 
19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[fpi]="0"' 00:09:37.907 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[fpi]=0 00:09:37.907 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.907 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.907 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:37.907 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dlfeat]="1"' 00:09:37.907 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1 00:09:37.907 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.907 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.907 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.907 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nawun]="0"' 00:09:37.907 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawun]=0 00:09:37.907 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.907 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.907 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.907 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nawupf]="0"' 00:09:37.907 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0 00:09:37.907 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.907 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.907 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.907 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nacwu]="0"' 00:09:37.907 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0 00:09:37.907 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.907 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.907 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.907 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabsn]="0"' 00:09:37.907 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0 00:09:37.907 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.907 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.907 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.907 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabo]="0"' 00:09:37.907 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabo]=0 00:09:37.907 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.907 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.907 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.907 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabspf]="0"' 00:09:37.907 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabspf]=0 00:09:37.907 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.907 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.907 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.907 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[noiob]="0"' 00:09:37.907 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[noiob]=0 00:09:37.907 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.907 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.907 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.907 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmcap]="0"' 00:09:37.907 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nvmcap]=0 00:09:37.907 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- 
# IFS=: 00:09:37.907 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.907 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.907 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npwg]="0"' 00:09:37.907 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npwg]=0 00:09:37.908 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.908 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.908 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.908 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npwa]="0"' 00:09:37.908 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npwa]=0 00:09:37.908 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.908 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.908 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.908 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npdg]="0"' 00:09:37.908 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npdg]=0 00:09:37.908 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.908 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.908 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.908 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npda]="0"' 00:09:37.908 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npda]=0 00:09:37.908 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.908 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.908 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.908 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nows]="0"' 00:09:37.908 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nows]=0 00:09:37.908 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.908 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.908 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:37.908 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mssrl]="128"' 00:09:37.908 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128 00:09:37.908 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.908 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.908 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:37.908 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mcl]="128"' 00:09:37.908 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mcl]=128 00:09:37.908 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.908 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.908 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:37.908 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[msrc]="127"' 00:09:37.908 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[msrc]=127 00:09:37.908 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.908 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.908 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.908 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nulbaf]="0"' 00:09:37.908 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0 00:09:37.908 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.908 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.908 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.908 19:11:22 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'ng2n1[anagrpid]="0"' 00:09:37.908 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0 00:09:37.908 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.908 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.908 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.908 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsattr]="0"' 00:09:37.908 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsattr]=0 00:09:37.908 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.908 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.908 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.908 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmsetid]="0"' 00:09:37.908 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nvmsetid]=0 00:09:37.908 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.908 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.908 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.908 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[endgid]="0"' 00:09:37.908 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[endgid]=0 00:09:37.908 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.908 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.908 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:37.908 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nguid]="00000000000000000000000000000000"' 00:09:37.908 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000 00:09:37.908 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.908 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.908 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:37.908 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[eui64]="0000000000000000"' 00:09:37.908 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000 00:09:37.908 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.908 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.908 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:37.908 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:37.908 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:37.908 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.908 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.908 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:37.908 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:37.908 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:37.908 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.908 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.908 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:37.908 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:37.908 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:37.908 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.908 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg 
val 00:09:37.908 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:37.908 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:37.908 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:37.908 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.908 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.908 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:37.908 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:37.908 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:37.908 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.908 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.908 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:37.908 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:37.908 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:37.908 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.908 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.908 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:37.908 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:37.908 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:37.908 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.908 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.908 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:37.908 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:37.908 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:37.908 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.908 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.908 19:11:22 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1 00:09:37.908 19:11:22 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:37.908 19:11:22 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]] 00:09:37.908 19:11:22 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n2 00:09:37.908 19:11:22 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2 00:09:37.908 19:11:22 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n2 reg val 00:09:37.908 19:11:22 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:37.908 19:11:22 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()' 00:09:37.908 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.908 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.908 19:11:22 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2 00:09:37.908 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:37.908 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.908 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.908 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:37.908 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsze]="0x100000"' 00:09:37.908 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # 
ng2n2[nsze]=0x100000 00:09:37.908 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.908 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.908 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:37.908 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[ncap]="0x100000"' 00:09:37.908 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[ncap]=0x100000 00:09:37.908 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.908 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.908 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:37.908 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nuse]="0x100000"' 00:09:37.908 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nuse]=0x100000 00:09:37.908 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.908 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.908 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:37.908 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsfeat]="0x14"' 00:09:37.908 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsfeat]=0x14 00:09:37.908 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.908 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.908 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:37.908 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nlbaf]="7"' 00:09:37.908 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nlbaf]=7 00:09:37.908 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.908 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.908 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:37.908 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[flbas]="0x4"' 00:09:37.908 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[flbas]=0x4 00:09:37.908 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.908 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.909 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:37.909 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mc]="0x3"' 00:09:37.909 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mc]=0x3 00:09:37.909 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.909 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.909 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:37.909 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dpc]="0x1f"' 00:09:37.909 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dpc]=0x1f 00:09:37.909 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.909 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.909 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.909 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dps]="0"' 00:09:37.909 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dps]=0 00:09:37.909 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.909 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.909 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.909 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nmic]="0"' 00:09:37.909 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nmic]=0 00:09:37.909 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.909 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read 
-r reg val 00:09:37.909 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.909 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[rescap]="0"' 00:09:37.909 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[rescap]=0 00:09:37.909 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.909 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.909 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.909 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[fpi]="0"' 00:09:37.909 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[fpi]=0 00:09:37.909 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.909 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.909 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:37.909 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dlfeat]="1"' 00:09:37.909 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dlfeat]=1 00:09:37.909 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.909 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.909 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.909 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nawun]="0"' 00:09:37.909 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nawun]=0 00:09:37.909 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.909 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.909 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.909 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nawupf]="0"' 00:09:37.909 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nawupf]=0 00:09:37.909 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.909 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.909 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.909 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nacwu]="0"' 00:09:37.909 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nacwu]=0 00:09:37.909 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.909 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.909 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.909 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabsn]="0"' 00:09:37.909 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabsn]=0 00:09:37.909 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.909 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.909 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.909 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabo]="0"' 00:09:37.909 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabo]=0 00:09:37.909 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.909 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.909 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.909 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabspf]="0"' 00:09:37.909 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabspf]=0 00:09:37.909 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.909 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.909 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.909 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[noiob]="0"' 00:09:37.909 19:11:22 nvme_fdp -- 
nvme/functions.sh@23 -- # ng2n2[noiob]=0
00:09:37.909 19:11:22 nvme_fdp -- nvme/functions.sh@21-23 -- # (id-ns parse loop, condensed) ng2n2: nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:09:37.910 19:11:22 nvme_fdp -- nvme/functions.sh@21-23 -- # (condensed) ng2n2: lbaf0='ms:0 lbads:9 rp:0 ' lbaf1='ms:8 lbads:9 rp:0 ' lbaf2='ms:16 lbads:9 rp:0 ' lbaf3='ms:64 lbads:9 rp:0 ' lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0 ' lbaf6='ms:16 lbads:12 rp:0 ' lbaf7='ms:64 lbads:12 rp:0 '
00:09:37.910 19:11:22 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2
00:09:37.910 19:11:22 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:09:37.910 19:11:22 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]]
00:09:37.910 19:11:22 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n3
00:09:37.910 19:11:22 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3
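The @21-@23 steps condensed above are a single parse loop: nvme_get reads each "reg : val" line that nvme-cli prints for id-ns and evals the pair into a global associative array named after the namespace device (ng2n2, ng2n3, ...). A minimal sketch of that loop, reconstructed from this trace; the key/value trimming details and the NVME_CMD variable name are assumptions, and the real nvme/functions.sh may differ:

    # Sketch reconstructed from the trace: populate an assoc array from `nvme id-ns` output
    nvme_get() {                              # e.g. nvme_get ng2n3 id-ns /dev/ng2n3
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                   # global array visible to the caller, as at @20
        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue         # keep only lines that split as "reg : val"
            reg=${reg//[[:space:]]/}          # assumed: strip the padding around the key
            val=${val# }                      # assumed: drop the single leading space
            eval "${ref}[\$reg]=\$val"        # e.g. ng2n3[nsze]=0x100000, as at @23
        done < <("$NVME_CMD" "$@")            # NVME_CMD (assumed name) is /usr/local/src/nvme-cli/nvme in this run
    }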
00:09:37.910 19:11:22 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3
00:09:37.910 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:09:37.910 19:11:22 nvme_fdp -- nvme/functions.sh@21-23 -- # (id-ns parse loop, condensed) ng2n3: nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:09:37.911 19:11:22 nvme_fdp -- nvme/functions.sh@21-23 -- # (condensed) ng2n3: lbaf0-lbaf7 identical to ng2n2 above, lbaf4='ms:0 lbads:12 rp:0 (in use)'
00:09:37.911 19:11:22 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3
00:09:37.911 19:11:22 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:09:37.911 19:11:22 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]]
00:09:37.911 19:11:22 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1
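Every namespace parsed so far reports flbas=0x4 with lbaf4 flagged "(in use)". The low nibble of FLBAS selects the active LBA format, and lbads is log2 of the LBA data size, so these namespaces are formatted with 4096-byte data blocks and no metadata (ms:0). A quick check of that arithmetic in the same shell:

    flbas=0x4
    fmt=$(( flbas & 0xf ))                    # -> 4, i.e. lbaf4 is the active format
    lbads=12                                  # from 'ms:0 lbads:12 rp:0 (in use)'
    echo "lbaf$fmt: $(( 1 << lbads ))-byte blocks"   # prints: lbaf4: 4096-byte blocks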
00:09:37.911 19:11:22 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1
00:09:37.911 19:11:22 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1
00:09:37.911 19:11:22 nvme_fdp -- nvme/functions.sh@21-23 -- # (id-ns parse loop, condensed) nvme2n1: nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:09:37.913 19:11:22 nvme_fdp -- nvme/functions.sh@21-23 -- # (condensed) nvme2n1: lbaf0-lbaf7 identical to ng2n2 above, lbaf4='ms:0 lbads:12 rp:0 (in use)'
00:09:37.913 19:11:22 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1
00:09:37.913 19:11:22 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:09:37.913 19:11:22 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]]
00:09:37.913 19:11:22 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2
00:09:37.913 19:11:22 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2
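The @54-@58 lines frame the discovery loop around nvme_get: for each controller it globs both the ng<id>n<ns> character-device nodes and the nvme<id>n<ns> block-device nodes under /sys/class/nvme, parses each with nvme_get, and records the device name in _ctrl_ns keyed by namespace number. A sketch of that loop as it reads in the trace (extglob is required for the @(...) pattern; the setup lines and the ${ns##*/} derivation of ns_dev are assumptions added to make it self-contained):

    shopt -s extglob
    declare -A _ctrl_ns=()                    # assumed declared earlier in functions.sh
    ctrl=/sys/class/nvme/nvme2
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
        [[ -e $ns ]] || continue              # as at @55
        ns_dev=${ns##*/}                      # ng2n1 ... nvme2n3, as at @56
        nvme_get "$ns_dev" id-ns "/dev/$ns_dev"
        _ctrl_ns[${ns##*n}]=$ns_dev           # ng2nX and nvme2nX share key X, so the
    done                                      # nvme2nX entry seen later overwrites ng2nX

This is why the trace assigns _ctrl_ns[${ns##*n}] twice per namespace: once for the ng2nX generic node and again, with the same index, for the nvme2nX block device.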
00:09:37.913 19:11:22 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2
00:09:37.913 19:11:22 nvme_fdp -- nvme/functions.sh@21-23 -- # (id-ns parse loop, condensed) nvme2n2: nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:09:37.914 19:11:22 nvme_fdp -- nvme/functions.sh@21-23 -- # (condensed) nvme2n2: lbaf0-lbaf7 identical to ng2n2 above, lbaf4='ms:0 lbads:12 rp:0 (in use)'
00:09:37.914 19:11:22 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2
00:09:37.914 19:11:22 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:09:37.914 19:11:22 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]]
00:09:37.914 19:11:22 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3
00:09:37.914 19:11:22 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.915 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.915 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:37.915 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:09:37.915 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:09:37.915 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.915 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.915 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:37.915 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:09:37.915 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:09:37.915 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.915 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.915 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:37.915 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:09:37.915 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:09:37.915 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.915 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.915 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:37.915 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:09:37.915 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:09:37.915 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.915 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.915 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.915 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:09:37.915 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:09:37.915 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.915 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.915 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.915 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:09:37.915 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:09:37.915 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.915 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.915 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.915 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:09:37.915 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:09:37.915 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.915 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.915 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.915 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:09:37.915 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:09:37.915 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.915 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.915 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:37.915 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:09:37.915 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:09:37.915 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.915 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.915 19:11:22 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.915 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:09:37.915 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:09:37.915 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.915 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.915 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.915 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:09:37.915 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:09:37.915 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.915 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.915 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.915 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:09:37.915 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:09:37.915 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.915 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.915 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.915 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:09:37.915 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:09:37.915 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.915 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.915 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.915 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:09:37.915 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:09:37.915 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.915 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.915 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.915 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:09:37.915 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:09:37.915 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.915 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.915 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.915 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:09:37.915 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:09:37.915 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.915 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.915 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.915 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:09:37.915 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:09:37.915 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.915 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.915 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.915 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:09:37.915 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:09:37.915 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.915 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.915 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.915 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:09:37.915 19:11:22 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:09:37.915 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.915 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.915 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.915 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:09:37.915 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:09:37.915 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.915 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.915 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.915 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:09:37.915 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:09:37.915 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.915 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.915 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.915 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:09:37.915 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:09:37.915 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.915 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.915 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:37.915 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:09:37.915 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:09:37.915 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.915 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.915 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:37.916 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:09:37.916 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:09:37.916 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.916 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.916 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:37.916 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:09:37.916 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:09:37.916 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.916 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.916 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.916 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:09:37.916 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:09:37.916 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.916 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.916 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.916 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:09:37.916 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:09:37.916 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.916 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.916 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.916 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:09:37.916 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:09:37.916 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.916 19:11:22 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:37.916 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.916 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:09:37.916 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:09:37.916 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.916 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.916 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.916 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:09:37.916 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:09:37.916 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.916 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.916 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:37.916 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:09:37.916 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:09:37.916 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.916 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.916 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:37.916 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:09:37.916 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:09:37.916 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.916 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.916 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:37.916 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:37.916 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:37.916 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.916 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.916 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:37.916 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:37.916 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:37.916 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.916 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.916 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:37.916 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:37.916 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:37.916 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.916 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.916 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:37.916 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:37.916 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:37.916 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.916 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.916 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:37.916 19:11:22 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:37.916 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:37.916 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.916 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.916 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:37.916 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:37.916 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:37.916 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.916 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.916 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:37.916 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:37.916 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:37.916 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.916 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.916 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:37.916 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:37.916 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:37.916 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.916 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.916 19:11:22 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:09:37.916 19:11:22 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:09:37.916 19:11:22 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:09:37.916 19:11:22 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:09:37.916 19:11:22 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:09:37.916 19:11:22 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:37.916 19:11:22 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:09:37.916 19:11:22 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:09:37.916 19:11:22 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:09:37.916 19:11:22 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:37.916 19:11:22 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:37.916 19:11:22 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:37.916 19:11:22 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:37.916 19:11:22 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:09:37.916 19:11:22 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:09:37.916 19:11:22 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:09:37.916 19:11:22 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:37.916 19:11:22 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:09:37.916 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.916 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.916 19:11:22 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:09:37.916 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:37.916 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.916 19:11:22 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:37.916 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:37.916 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:09:37.916 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:09:37.916 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.916 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.916 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:37.916 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:09:37.916 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:09:37.916 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.916 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.916 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:09:37.916 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:09:37.916 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:09:37.916 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.916 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.916 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:37.916 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:09:37.916 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:09:37.916 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.916 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.916 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:37.916 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:09:37.916 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:09:37.916 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.916 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.916 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:37.916 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:09:37.916 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:09:37.916 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.916 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.916 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:37.916 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:09:37.916 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:09:37.916 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.916 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.916 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:09:37.916 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:09:37.916 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:09:37.916 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.916 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.916 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:37.917 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:09:37.917 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:09:37.917 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.917 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.917 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:09:37.917 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:09:37.917 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:09:37.917 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.917 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.917 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:37.917 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:09:37.917 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:09:37.917 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.917 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.917 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.917 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:09:37.917 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:09:37.917 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.917 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.917 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.917 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:09:37.917 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:09:37.917 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.917 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.917 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:37.917 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:09:37.917 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:09:37.917 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.917 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.917 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:09:37.917 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:09:37.917 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:09:37.917 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.917 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.917 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.917 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:09:37.917 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:09:37.917 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.917 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.917 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:37.917 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:09:37.917 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:09:37.917 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.917 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.917 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:37.917 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:37.917 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:09:37.917 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.917 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.917 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.917 19:11:22 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:09:37.917 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:09:37.917 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.917 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.917 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.917 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:09:37.917 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:09:37.917 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.917 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.917 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.917 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:09:37.917 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:09:37.917 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.917 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.917 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.917 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:09:37.917 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:09:37.917 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.917 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.917 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.917 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:09:37.917 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:09:37.917 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.917 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.917 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.917 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:09:37.917 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:09:37.917 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.917 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.917 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:37.917 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:09:37.917 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:09:37.917 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.917 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.917 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:37.917 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:09:37.917 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:09:37.917 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.917 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.917 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:37.917 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:09:37.917 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:09:37.917 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.917 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.917 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:37.917 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:09:37.917 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:09:37.917 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.917 
19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.917 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:37.917 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:09:37.917 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:09:37.917 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.917 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.917 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.917 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:09:37.917 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:09:37.917 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.917 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.917 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.917 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:09:37.917 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:09:37.917 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.917 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.917 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.917 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:09:38.179 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:09:38.179 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.179 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.179 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.179 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:09:38.179 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:09:38.179 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.179 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.179 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:38.179 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:09:38.179 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:09:38.179 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.179 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.179 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:38.179 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:09:38.179 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:09:38.179 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.179 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.179 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.179 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:09:38.179 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:09:38.179 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.179 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.179 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.179 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:09:38.179 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:09:38.179 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.179 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.179 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.179 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # 
eval 'nvme3[hmmin]="0"' 00:09:38.179 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:09:38.179 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.179 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.179 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.179 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:09:38.179 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:09:38.179 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.179 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.179 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.179 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:09:38.179 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:09:38.179 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.179 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.179 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.179 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:09:38.179 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:09:38.179 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.179 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.179 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.179 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:09:38.179 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:09:38.179 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.179 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.179 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.179 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:09:38.179 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:09:38.179 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.179 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.179 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.179 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:09:38.179 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:09:38.179 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.179 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.179 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.179 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:09:38.179 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:09:38.179 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.179 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.179 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.179 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:09:38.179 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:09:38.179 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.179 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.179 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.179 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:09:38.179 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:09:38.179 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.179 19:11:22 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:38.179 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.179 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:09:38.179 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:09:38.179 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.179 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.179 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.179 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:09:38.179 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:09:38.179 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.179 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.179 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.179 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:09:38.179 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:09:38.179 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.179 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.179 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.179 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:09:38.179 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:09:38.179 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.179 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.179 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.179 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:09:38.179 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:09:38.179 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.179 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.179 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:38.179 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:09:38.179 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:09:38.179 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.179 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.179 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.179 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:09:38.179 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:09:38.179 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.179 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.179 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.180 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:09:38.180 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:09:38.180 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.180 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.180 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.180 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:09:38.180 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:09:38.180 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.180 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.180 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.180 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- 
# eval 'nvme3[nanagrpid]="0"' 00:09:38.180 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:09:38.180 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.180 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.180 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.180 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:09:38.180 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:09:38.180 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.180 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.180 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.180 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:09:38.180 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:09:38.180 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.180 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.180 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.180 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:09:38.180 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:09:38.180 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.180 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.180 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:38.180 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:09:38.180 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:09:38.180 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.180 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.180 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:38.180 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:09:38.180 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:09:38.180 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.180 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.180 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.180 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:09:38.180 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:09:38.180 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.180 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.180 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:38.180 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:09:38.180 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:09:38.180 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.180 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.180 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:38.180 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:09:38.180 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:09:38.180 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.180 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.180 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.180 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:09:38.180 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:09:38.180 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
00:09:38.180 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.180 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.180 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:09:38.180 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:09:38.180 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.180 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.180 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:38.180 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:09:38.180 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:09:38.180 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.180 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.180 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.180 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:09:38.180 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:09:38.180 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.180 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.180 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.180 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:09:38.180 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:09:38.180 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.180 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.180 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.180 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:09:38.180 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:09:38.180 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.180 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.180 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.180 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:09:38.180 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:09:38.180 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.180 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.180 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.180 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:09:38.180 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:09:38.180 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.180 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.180 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:38.180 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:09:38.180 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:09:38.180 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.180 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.180 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:38.180 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:09:38.180 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:09:38.180 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.180 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.180 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.180 19:11:22 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme3[mnan]="0"' 00:09:38.180 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:09:38.180 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.180 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.180 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.180 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:09:38.180 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:09:38.180 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.180 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.180 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.180 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:09:38.180 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:09:38.180 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.180 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.180 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:09:38.180 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:09:38.180 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:09:38.180 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.180 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.180 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.180 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:09:38.180 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:09:38.180 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.180 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.180 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.180 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:09:38.180 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:09:38.180 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.180 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.180 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.180 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:09:38.180 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:09:38.180 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.180 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.180 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.180 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:09:38.180 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:09:38.180 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.180 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.180 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.180 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:09:38.180 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:09:38.180 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.180 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.180 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.180 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:09:38.180 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 
00:09:38.180 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.180 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.180 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:38.180 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:38.180 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:38.180 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.180 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.181 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:38.181 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:38.181 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:38.181 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.181 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.181 19:11:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:38.181 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:09:38.181 19:11:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:09:38.181 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.181 19:11:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.181 19:11:22 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:09:38.181 19:11:22 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:09:38.181 19:11:22 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:09:38.181 19:11:22 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:09:38.181 19:11:22 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:09:38.181 19:11:22 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:09:38.181 19:11:22 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:09:38.181 19:11:22 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:09:38.181 19:11:22 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:09:38.181 19:11:22 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:09:38.181 19:11:22 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:09:38.181 19:11:22 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:09:38.181 19:11:22 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:09:38.181 19:11:22 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:09:38.181 19:11:22 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:38.181 19:11:22 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:09:38.181 19:11:22 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:09:38.181 19:11:22 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:09:38.181 19:11:22 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:09:38.181 19:11:22 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:09:38.181 19:11:22 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:09:38.181 19:11:22 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:09:38.181 19:11:22 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:09:38.181 19:11:22 nvme_fdp -- nvme/functions.sh@75 
-- # [[ -n 0x8000 ]] 00:09:38.181 19:11:22 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:38.181 19:11:22 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:38.181 19:11:22 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:38.181 19:11:22 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:38.181 19:11:22 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:09:38.181 19:11:22 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:09:38.181 19:11:22 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:09:38.181 19:11:22 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:09:38.181 19:11:22 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:09:38.181 19:11:22 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:09:38.181 19:11:22 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:09:38.181 19:11:22 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:09:38.181 19:11:22 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:09:38.181 19:11:22 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:38.181 19:11:22 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:38.181 19:11:22 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:38.181 19:11:22 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:38.181 19:11:22 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:09:38.181 19:11:22 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:09:38.181 19:11:22 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:09:38.181 19:11:22 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:09:38.181 19:11:22 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:09:38.181 19:11:22 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:09:38.181 19:11:22 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:09:38.181 19:11:22 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:09:38.181 19:11:22 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:09:38.181 19:11:22 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:09:38.181 19:11:22 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:09:38.181 19:11:22 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:38.181 19:11:22 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:09:38.181 19:11:22 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:38.181 19:11:22 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:09:38.181 19:11:22 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:09:38.181 19:11:22 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:09:38.181 19:11:22 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:09:38.181 19:11:22 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:09:38.181 19:11:22 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:09:38.181 19:11:22 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:09:38.181 19:11:22 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:09:38.181 19:11:22 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:09:38.181 19:11:22 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:38.181 19:11:22 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:38.181 19:11:22 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:38.181 19:11:22 nvme_fdp -- nvme/functions.sh@207 -- 
# (( 1 > 0 )) 00:09:38.181 19:11:22 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:09:38.181 19:11:22 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:09:38.181 19:11:22 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:09:38.181 19:11:22 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:09:38.181 19:11:22 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:38.442 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:39.014 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:39.014 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:39.014 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:09:39.014 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:39.014 19:11:23 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:09:39.014 19:11:23 nvme_fdp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:39.014 19:11:23 nvme_fdp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:39.014 19:11:23 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:09:39.014 ************************************ 00:09:39.014 START TEST nvme_flexible_data_placement 00:09:39.014 ************************************ 00:09:39.014 19:11:23 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:09:39.275 Initializing NVMe Controllers 00:09:39.275 Attaching to 0000:00:13.0 00:09:39.275 Controller supports FDP Attached to 0000:00:13.0 00:09:39.275 Namespace ID: 1 Endurance Group ID: 1 00:09:39.275 Initialization complete. 
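The controller walk a few entries above reduces FDP detection to a single bit test: CTRATT bit 19 (Flexible Data Placement) must be set, which is why nvme0, nvme1, and nvme2 (ctratt=0x8000) are skipped and nvme3 (ctratt=0x88010) is selected. A minimal standalone sketch of that check, written here for illustration and not part of functions.sh:

    ctrl_supports_fdp() {
        # CTRATT bit 19 = Flexible Data Placement supported
        # (Identify Controller field, NVMe 2.0)
        local ctratt=$1
        (( ctratt & 1 << 19 ))
    }
    ctrl_supports_fdp 0x8000  && echo FDP || echo no-FDP   # no-FDP (nvme0/1/2)
    ctrl_supports_fdp 0x88010 && echo FDP || echo no-FDP   # FDP    (nvme3)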
00:09:39.275 00:09:39.275 ================================== 00:09:39.275 == FDP tests for Namespace: #01 == 00:09:39.275 ================================== 00:09:39.275 00:09:39.275 Get Feature: FDP: 00:09:39.275 ================= 00:09:39.275 Enabled: Yes 00:09:39.275 FDP configuration Index: 0 00:09:39.275 00:09:39.275 FDP configurations log page 00:09:39.275 =========================== 00:09:39.275 Number of FDP configurations: 1 00:09:39.275 Version: 0 00:09:39.275 Size: 112 00:09:39.275 FDP Configuration Descriptor: 0 00:09:39.275 Descriptor Size: 96 00:09:39.275 Reclaim Group Identifier format: 2 00:09:39.275 FDP Volatile Write Cache: Not Present 00:09:39.275 FDP Configuration: Valid 00:09:39.275 Vendor Specific Size: 0 00:09:39.275 Number of Reclaim Groups: 2 00:09:39.275 Number of Reclaim Unit Handles: 8 00:09:39.275 Max Placement Identifiers: 128 00:09:39.275 Number of Namespaces Supported: 256 00:09:39.275 Reclaim Unit Nominal Size: 6000000 bytes 00:09:39.275 Estimated Reclaim Unit Time Limit: Not Reported 00:09:39.275 RUH Desc #000: RUH Type: Initially Isolated 00:09:39.275 RUH Desc #001: RUH Type: Initially Isolated 00:09:39.275 RUH Desc #002: RUH Type: Initially Isolated 00:09:39.275 RUH Desc #003: RUH Type: Initially Isolated 00:09:39.275 RUH Desc #004: RUH Type: Initially Isolated 00:09:39.275 RUH Desc #005: RUH Type: Initially Isolated 00:09:39.275 RUH Desc #006: RUH Type: Initially Isolated 00:09:39.275 RUH Desc #007: RUH Type: Initially Isolated 00:09:39.275 00:09:39.275 FDP reclaim unit handle usage log page 00:09:39.275 ====================================== 00:09:39.275 Number of Reclaim Unit Handles: 8 00:09:39.275 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:09:39.275 RUH Usage Desc #001: RUH Attributes: Unused 00:09:39.275 RUH Usage Desc #002: RUH Attributes: Unused 00:09:39.275 RUH Usage Desc #003: RUH Attributes: Unused 00:09:39.275 RUH Usage Desc #004: RUH Attributes: Unused 00:09:39.275 RUH Usage Desc #005: RUH Attributes: Unused 00:09:39.275 RUH Usage Desc #006: RUH Attributes: Unused 00:09:39.275 RUH Usage Desc #007: RUH Attributes: Unused 00:09:39.275 00:09:39.275 FDP statistics log page 00:09:39.275 ======================= 00:09:39.275 Host bytes with metadata written: 1207795712 00:09:39.275 Media bytes with metadata written: 1207959552 00:09:39.275 Media bytes erased: 0 00:09:39.275 00:09:39.275 FDP Reclaim unit handle status 00:09:39.275 ============================== 00:09:39.275 Number of RUHS descriptors: 2 00:09:39.276 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000000028 00:09:39.276 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:09:39.276 00:09:39.276 FDP write on placement id: 0 success 00:09:39.276 00:09:39.276 Set Feature: Enabling FDP events on Placement handle: #0 Success 00:09:39.276 00:09:39.276 IO mgmt send: RUH update for Placement ID: #0 Success 00:09:39.276 00:09:39.276 Get Feature: FDP Events for Placement handle: #0 00:09:39.276 ======================== 00:09:39.276 Number of FDP Events: 6 00:09:39.276 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:09:39.276 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:09:39.276 FDP Event: #2 Type: Ctrlr Reset Modified RUH's Enabled: Yes 00:09:39.276 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:09:39.276 FDP Event: #4 Type: Media Reallocated Enabled: No 00:09:39.276 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:09:39.276 00:09:39.276 FDP events log
page 00:09:39.276 =================== 00:09:39.276 Number of FDP events: 1 00:09:39.276 FDP Event #0: 00:09:39.276 Event Type: RU Not Written to Capacity 00:09:39.276 Placement Identifier: Valid 00:09:39.276 NSID: Valid 00:09:39.276 Location: Valid 00:09:39.276 Placement Identifier: 0 00:09:39.276 Event Timestamp: 6 00:09:39.276 Namespace Identifier: 1 00:09:39.276 Reclaim Group Identifier: 0 00:09:39.276 Reclaim Unit Handle Identifier: 0 00:09:39.276 00:09:39.276 FDP test passed 00:09:39.276 00:09:39.276 real 0m0.240s 00:09:39.276 user 0m0.077s 00:09:39.276 sys 0m0.063s 00:09:39.276 19:11:23 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:39.276 ************************************ 00:09:39.276 END TEST nvme_flexible_data_placement 00:09:39.276 ************************************ 00:09:39.276 19:11:23 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:09:39.276 ************************************ 00:09:39.276 END TEST nvme_fdp 00:09:39.276 ************************************ 00:09:39.276 00:09:39.276 real 0m7.601s 00:09:39.276 user 0m1.100s 00:09:39.276 sys 0m1.351s 00:09:39.276 19:11:23 nvme_fdp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:39.276 19:11:23 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:09:39.276 19:11:23 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 00:09:39.276 19:11:23 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:09:39.276 19:11:23 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:39.276 19:11:23 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:39.276 19:11:23 -- common/autotest_common.sh@10 -- # set +x 00:09:39.276 ************************************ 00:09:39.276 START TEST nvme_rpc 00:09:39.276 ************************************ 00:09:39.276 19:11:23 nvme_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:09:39.537 * Looking for test storage... 
00:09:39.537 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:39.537 19:11:23 nvme_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:39.537 19:11:23 nvme_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:09:39.537 19:11:23 nvme_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:39.537 19:11:23 nvme_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:39.537 19:11:23 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:39.537 19:11:23 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:39.537 19:11:23 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:39.537 19:11:23 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:09:39.537 19:11:23 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:09:39.537 19:11:23 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:09:39.537 19:11:23 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:09:39.537 19:11:23 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:09:39.537 19:11:23 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:09:39.537 19:11:23 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:09:39.537 19:11:23 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:39.537 19:11:23 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:09:39.537 19:11:23 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:09:39.537 19:11:23 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:39.537 19:11:23 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:39.537 19:11:23 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:09:39.537 19:11:23 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:09:39.537 19:11:23 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:39.537 19:11:23 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:09:39.537 19:11:23 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:39.537 19:11:23 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:09:39.537 19:11:23 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:09:39.537 19:11:23 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:39.537 19:11:23 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:09:39.537 19:11:23 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:39.537 19:11:23 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:39.537 19:11:23 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:39.537 19:11:23 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:09:39.537 19:11:23 nvme_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:39.537 19:11:23 nvme_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:39.537 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:39.537 --rc genhtml_branch_coverage=1 00:09:39.537 --rc genhtml_function_coverage=1 00:09:39.537 --rc genhtml_legend=1 00:09:39.537 --rc geninfo_all_blocks=1 00:09:39.537 --rc geninfo_unexecuted_blocks=1 00:09:39.537 00:09:39.537 ' 00:09:39.537 19:11:23 nvme_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:39.537 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:39.537 --rc genhtml_branch_coverage=1 00:09:39.537 --rc genhtml_function_coverage=1 00:09:39.537 --rc genhtml_legend=1 00:09:39.537 --rc geninfo_all_blocks=1 00:09:39.537 --rc geninfo_unexecuted_blocks=1 00:09:39.537 00:09:39.537 ' 00:09:39.537 19:11:23 nvme_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:09:39.537 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:39.537 --rc genhtml_branch_coverage=1 00:09:39.537 --rc genhtml_function_coverage=1 00:09:39.537 --rc genhtml_legend=1 00:09:39.537 --rc geninfo_all_blocks=1 00:09:39.537 --rc geninfo_unexecuted_blocks=1 00:09:39.537 00:09:39.537 ' 00:09:39.537 19:11:23 nvme_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:39.537 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:39.537 --rc genhtml_branch_coverage=1 00:09:39.537 --rc genhtml_function_coverage=1 00:09:39.537 --rc genhtml_legend=1 00:09:39.537 --rc geninfo_all_blocks=1 00:09:39.537 --rc geninfo_unexecuted_blocks=1 00:09:39.537 00:09:39.537 ' 00:09:39.537 19:11:23 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:39.537 19:11:23 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:09:39.537 19:11:23 nvme_rpc -- common/autotest_common.sh@1509 -- # bdfs=() 00:09:39.537 19:11:23 nvme_rpc -- common/autotest_common.sh@1509 -- # local bdfs 00:09:39.537 19:11:23 nvme_rpc -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:09:39.537 19:11:23 nvme_rpc -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:09:39.537 19:11:23 nvme_rpc -- common/autotest_common.sh@1498 -- # bdfs=() 00:09:39.537 19:11:23 nvme_rpc -- common/autotest_common.sh@1498 -- # local bdfs 00:09:39.537 19:11:23 nvme_rpc -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:39.537 19:11:23 nvme_rpc -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:39.537 19:11:23 nvme_rpc -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:09:39.537 19:11:23 nvme_rpc -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:09:39.537 19:11:23 nvme_rpc -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:39.537 19:11:23 nvme_rpc -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:09:39.537 19:11:23 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:09:39.537 19:11:23 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=67539 00:09:39.537 19:11:23 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:09:39.538 19:11:23 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:09:39.538 19:11:23 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 67539 00:09:39.538 19:11:23 nvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 67539 ']' 00:09:39.538 19:11:23 nvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:39.538 19:11:23 nvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:39.538 19:11:23 nvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:39.538 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:39.538 19:11:23 nvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:39.538 19:11:23 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:39.538 [2024-12-16 19:11:23.887442] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
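The get_first_nvme_bdf sequence traced above asks gen_nvme.sh for every controller it would configure and keeps the first PCI address. Roughly the same selection as a one-liner, assuming the same repo layout and jq on PATH (a sketch, not the common.sh helper itself):

    # List every NVMe PCI address the config generator knows, keep the first.
    bdf=$(/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh | jq -r '.config[].params.traddr' | head -n 1)
    echo "first NVMe bdf: $bdf"    # 0000:00:10.0 in this run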
00:09:39.538 [2024-12-16 19:11:23.887685] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67539 ] 00:09:39.798 [2024-12-16 19:11:24.046722] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:40.058 [2024-12-16 19:11:24.154054] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:40.058 [2024-12-16 19:11:24.154066] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:09:40.626 19:11:24 nvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:40.626 19:11:24 nvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:40.626 19:11:24 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:09:40.886 Nvme0n1 00:09:40.886 19:11:25 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:09:40.886 19:11:25 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:09:41.146 request: 00:09:41.146 { 00:09:41.146 "bdev_name": "Nvme0n1", 00:09:41.146 "filename": "non_existing_file", 00:09:41.146 "method": "bdev_nvme_apply_firmware", 00:09:41.146 "req_id": 1 00:09:41.146 } 00:09:41.146 Got JSON-RPC error response 00:09:41.146 response: 00:09:41.147 { 00:09:41.147 "code": -32603, 00:09:41.147 "message": "open file failed." 00:09:41.147 } 00:09:41.147 19:11:25 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:09:41.147 19:11:25 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:09:41.147 19:11:25 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:09:41.147 19:11:25 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:09:41.147 19:11:25 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 67539 00:09:41.147 19:11:25 nvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 67539 ']' 00:09:41.147 19:11:25 nvme_rpc -- common/autotest_common.sh@958 -- # kill -0 67539 00:09:41.147 19:11:25 nvme_rpc -- common/autotest_common.sh@959 -- # uname 00:09:41.147 19:11:25 nvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:41.147 19:11:25 nvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67539 00:09:41.147 killing process with pid 67539 00:09:41.147 19:11:25 nvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:41.147 19:11:25 nvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:41.147 19:11:25 nvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67539' 00:09:41.147 19:11:25 nvme_rpc -- common/autotest_common.sh@973 -- # kill 67539 00:09:41.147 19:11:25 nvme_rpc -- common/autotest_common.sh@978 -- # wait 67539 00:09:43.056 ************************************ 00:09:43.056 END TEST nvme_rpc 00:09:43.056 ************************************ 00:09:43.056 00:09:43.056 real 0m3.355s 00:09:43.056 user 0m6.361s 00:09:43.056 sys 0m0.514s 00:09:43.056 19:11:26 nvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:43.056 19:11:26 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:43.056 19:11:26 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:09:43.056 19:11:26 -- common/autotest_common.sh@1105 -- # '[' 2 -le 
1 ']' 00:09:43.056 19:11:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:43.056 19:11:26 -- common/autotest_common.sh@10 -- # set +x 00:09:43.056 ************************************ 00:09:43.056 START TEST nvme_rpc_timeouts 00:09:43.056 ************************************ 00:09:43.056 19:11:26 nvme_rpc_timeouts -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:09:43.056 * Looking for test storage... 00:09:43.056 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:43.056 19:11:27 nvme_rpc_timeouts -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:43.056 19:11:27 nvme_rpc_timeouts -- common/autotest_common.sh@1711 -- # lcov --version 00:09:43.056 19:11:27 nvme_rpc_timeouts -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:43.056 19:11:27 nvme_rpc_timeouts -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:43.056 19:11:27 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:43.056 19:11:27 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:43.056 19:11:27 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:43.056 19:11:27 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:09:43.056 19:11:27 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:09:43.056 19:11:27 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:09:43.056 19:11:27 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:09:43.056 19:11:27 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:09:43.056 19:11:27 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:09:43.056 19:11:27 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:09:43.056 19:11:27 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:43.056 19:11:27 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:09:43.056 19:11:27 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:09:43.056 19:11:27 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:43.056 19:11:27 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:43.056 19:11:27 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:09:43.056 19:11:27 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:09:43.056 19:11:27 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:43.056 19:11:27 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:09:43.056 19:11:27 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:09:43.056 19:11:27 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:09:43.056 19:11:27 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:09:43.056 19:11:27 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:43.056 19:11:27 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:09:43.056 19:11:27 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:09:43.056 19:11:27 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:43.056 19:11:27 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:43.056 19:11:27 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:09:43.056 19:11:27 nvme_rpc_timeouts -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:43.056 19:11:27 nvme_rpc_timeouts -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:43.056 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.056 --rc genhtml_branch_coverage=1 00:09:43.056 --rc genhtml_function_coverage=1 00:09:43.056 --rc genhtml_legend=1 00:09:43.056 --rc geninfo_all_blocks=1 00:09:43.056 --rc geninfo_unexecuted_blocks=1 00:09:43.056 00:09:43.056 ' 00:09:43.057 19:11:27 nvme_rpc_timeouts -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:43.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.057 --rc genhtml_branch_coverage=1 00:09:43.057 --rc genhtml_function_coverage=1 00:09:43.057 --rc genhtml_legend=1 00:09:43.057 --rc geninfo_all_blocks=1 00:09:43.057 --rc geninfo_unexecuted_blocks=1 00:09:43.057 00:09:43.057 ' 00:09:43.057 19:11:27 nvme_rpc_timeouts -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:43.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.057 --rc genhtml_branch_coverage=1 00:09:43.057 --rc genhtml_function_coverage=1 00:09:43.057 --rc genhtml_legend=1 00:09:43.057 --rc geninfo_all_blocks=1 00:09:43.057 --rc geninfo_unexecuted_blocks=1 00:09:43.057 00:09:43.057 ' 00:09:43.057 19:11:27 nvme_rpc_timeouts -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:43.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.057 --rc genhtml_branch_coverage=1 00:09:43.057 --rc genhtml_function_coverage=1 00:09:43.057 --rc genhtml_legend=1 00:09:43.057 --rc geninfo_all_blocks=1 00:09:43.057 --rc geninfo_unexecuted_blocks=1 00:09:43.057 00:09:43.057 ' 00:09:43.057 19:11:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:43.057 19:11:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_67604 00:09:43.057 19:11:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_67604 00:09:43.057 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:43.057 19:11:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=67637 00:09:43.057 19:11:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:09:43.057 19:11:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 67637 00:09:43.057 19:11:27 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # '[' -z 67637 ']' 00:09:43.057 19:11:27 nvme_rpc_timeouts -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:43.057 19:11:27 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:43.057 19:11:27 nvme_rpc_timeouts -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:43.057 19:11:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:09:43.057 19:11:27 nvme_rpc_timeouts -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:43.057 19:11:27 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:09:43.057 [2024-12-16 19:11:27.206282] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:09:43.057 [2024-12-16 19:11:27.206542] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67637 ] 00:09:43.057 [2024-12-16 19:11:27.356316] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:43.315 [2024-12-16 19:11:27.445990] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:09:43.315 [2024-12-16 19:11:27.446072] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:43.882 19:11:27 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:43.882 Checking default timeout settings: 00:09:43.882 19:11:27 nvme_rpc_timeouts -- common/autotest_common.sh@868 -- # return 0 00:09:43.882 19:11:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:09:43.882 19:11:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:09:44.140 Making settings changes with rpc: 00:09:44.140 19:11:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:09:44.140 19:11:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:09:44.398 Check default vs. modified settings: 00:09:44.398 19:11:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. 
modified settings: 00:09:44.398 19:11:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:09:44.656 19:11:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:09:44.656 19:11:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:44.657 19:11:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_67604 00:09:44.657 19:11:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:44.657 19:11:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:44.657 19:11:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:09:44.657 19:11:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:44.657 19:11:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_67604 00:09:44.657 19:11:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:44.657 Setting action_on_timeout is changed as expected. 00:09:44.657 19:11:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:09:44.657 19:11:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:09:44.657 19:11:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:09:44.657 19:11:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:44.657 19:11:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_67604 00:09:44.657 19:11:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:44.657 19:11:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:44.657 19:11:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:09:44.657 19:11:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:44.657 19:11:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_67604 00:09:44.657 19:11:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:44.657 Setting timeout_us is changed as expected. 00:09:44.657 19:11:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:09:44.657 19:11:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:09:44.657 19:11:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 
00:09:44.657 19:11:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:44.657 19:11:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:44.657 19:11:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:44.657 19:11:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_67604 00:09:44.657 19:11:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:09:44.657 19:11:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_67604 00:09:44.657 19:11:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:44.657 19:11:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:44.657 Setting timeout_admin_us is changed as expected. 00:09:44.657 19:11:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:09:44.657 19:11:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:09:44.657 19:11:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:09:44.657 19:11:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:09:44.657 19:11:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_67604 /tmp/settings_modified_67604 00:09:44.657 19:11:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 67637 00:09:44.657 19:11:28 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # '[' -z 67637 ']' 00:09:44.657 19:11:28 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # kill -0 67637 00:09:44.657 19:11:28 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # uname 00:09:44.657 19:11:28 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:44.657 19:11:28 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67637 00:09:44.657 killing process with pid 67637 00:09:44.657 19:11:28 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:44.657 19:11:28 nvme_rpc_timeouts -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:44.657 19:11:28 nvme_rpc_timeouts -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67637' 00:09:44.657 19:11:28 nvme_rpc_timeouts -- common/autotest_common.sh@973 -- # kill 67637 00:09:44.657 19:11:28 nvme_rpc_timeouts -- common/autotest_common.sh@978 -- # wait 67637 00:09:46.034 RPC TIMEOUT SETTING TEST PASSED. 00:09:46.034 19:11:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
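Condensed, the flow that just passed is: snapshot the default config, change the three NVMe timeout options over JSON-RPC, snapshot again, and verify each field differs. A rough standalone sketch of that sequence (the temp-file names and the comparison loop are illustrative, not the test script itself):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc save_config > /tmp/settings_default       # defaults before the change
    $rpc bdev_nvme_set_options --timeout-us=12000000 \
        --timeout-admin-us=24000000 --action-on-timeout=abort
    $rpc save_config > /tmp/settings_modified      # settings after the change
    for setting in action_on_timeout timeout_us timeout_admin_us; do
        # Pull the JSON value for this key from each snapshot and strip punctuation.
        before=$(grep "\"$setting\"" /tmp/settings_default | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        after=$(grep "\"$setting\"" /tmp/settings_modified | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        [ "$before" != "$after" ] && echo "Setting $setting is changed as expected."
    done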
00:09:46.034 00:09:46.034 real 0m3.144s 00:09:46.034 user 0m6.028s 00:09:46.034 sys 0m0.521s 00:09:46.034 ************************************ 00:09:46.034 END TEST nvme_rpc_timeouts 00:09:46.034 ************************************ 00:09:46.034 19:11:30 nvme_rpc_timeouts -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:46.034 19:11:30 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:09:46.034 19:11:30 -- spdk/autotest.sh@239 -- # uname -s 00:09:46.034 19:11:30 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:09:46.034 19:11:30 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:09:46.034 19:11:30 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:46.034 19:11:30 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:46.034 19:11:30 -- common/autotest_common.sh@10 -- # set +x 00:09:46.034 ************************************ 00:09:46.034 START TEST sw_hotplug 00:09:46.034 ************************************ 00:09:46.034 19:11:30 sw_hotplug -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:09:46.034 * Looking for test storage... 00:09:46.034 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:46.034 19:11:30 sw_hotplug -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:46.034 19:11:30 sw_hotplug -- common/autotest_common.sh@1711 -- # lcov --version 00:09:46.034 19:11:30 sw_hotplug -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:46.034 19:11:30 sw_hotplug -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:46.034 19:11:30 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:46.034 19:11:30 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:46.034 19:11:30 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:46.034 19:11:30 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:09:46.034 19:11:30 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:09:46.034 19:11:30 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:09:46.034 19:11:30 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:09:46.034 19:11:30 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:09:46.034 19:11:30 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:09:46.034 19:11:30 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:09:46.034 19:11:30 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:46.034 19:11:30 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:09:46.034 19:11:30 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:09:46.034 19:11:30 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:46.034 19:11:30 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:46.034 19:11:30 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:09:46.034 19:11:30 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:09:46.034 19:11:30 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:46.034 19:11:30 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:09:46.034 19:11:30 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:09:46.034 19:11:30 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:09:46.034 19:11:30 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:09:46.034 19:11:30 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:46.034 19:11:30 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:09:46.034 19:11:30 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:09:46.034 19:11:30 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:46.034 19:11:30 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:46.034 19:11:30 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:09:46.034 19:11:30 sw_hotplug -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:46.034 19:11:30 sw_hotplug -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:46.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:46.034 --rc genhtml_branch_coverage=1 00:09:46.034 --rc genhtml_function_coverage=1 00:09:46.034 --rc genhtml_legend=1 00:09:46.034 --rc geninfo_all_blocks=1 00:09:46.034 --rc geninfo_unexecuted_blocks=1 00:09:46.034 00:09:46.034 ' 00:09:46.034 19:11:30 sw_hotplug -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:46.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:46.034 --rc genhtml_branch_coverage=1 00:09:46.034 --rc genhtml_function_coverage=1 00:09:46.034 --rc genhtml_legend=1 00:09:46.034 --rc geninfo_all_blocks=1 00:09:46.034 --rc geninfo_unexecuted_blocks=1 00:09:46.034 00:09:46.034 ' 00:09:46.034 19:11:30 sw_hotplug -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:46.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:46.034 --rc genhtml_branch_coverage=1 00:09:46.034 --rc genhtml_function_coverage=1 00:09:46.034 --rc genhtml_legend=1 00:09:46.034 --rc geninfo_all_blocks=1 00:09:46.034 --rc geninfo_unexecuted_blocks=1 00:09:46.034 00:09:46.034 ' 00:09:46.034 19:11:30 sw_hotplug -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:46.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:46.034 --rc genhtml_branch_coverage=1 00:09:46.034 --rc genhtml_function_coverage=1 00:09:46.034 --rc genhtml_legend=1 00:09:46.034 --rc geninfo_all_blocks=1 00:09:46.034 --rc geninfo_unexecuted_blocks=1 00:09:46.034 00:09:46.034 ' 00:09:46.034 19:11:30 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:46.606 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:46.606 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:46.606 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:46.606 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:46.606 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:46.606 19:11:30 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:09:46.606 19:11:30 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:09:46.606 19:11:30 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 
00:09:46.606 19:11:30 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:09:46.606 19:11:30 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:09:46.606 19:11:30 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:09:46.606 19:11:30 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:09:46.606 19:11:30 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:09:46.606 19:11:30 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:09:46.606 19:11:30 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:09:46.606 19:11:30 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:09:46.606 19:11:30 sw_hotplug -- scripts/common.sh@233 -- # local class 00:09:46.606 19:11:30 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:09:46.606 19:11:30 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:09:46.606 19:11:30 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:09:46.606 19:11:30 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:09:46.606 19:11:30 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:09:46.606 19:11:30 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:09:46.606 19:11:30 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:09:46.606 19:11:30 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:09:46.606 19:11:30 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:09:46.606 19:11:30 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:09:46.606 19:11:30 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:09:46.606 19:11:30 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:09:46.606 19:11:30 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:09:46.606 19:11:30 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:09:46.606 19:11:30 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:46.606 19:11:30 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:09:46.606 19:11:30 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:46.606 19:11:30 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:46.606 19:11:30 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:46.606 19:11:30 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:46.606 19:11:30 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:09:46.607 19:11:30 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:46.607 19:11:30 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:09:46.607 19:11:30 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:46.607 19:11:30 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:46.607 19:11:30 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:46.607 19:11:30 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:46.607 19:11:30 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:09:46.607 19:11:30 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:46.607 19:11:30 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:09:46.607 19:11:30 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:46.607 19:11:30 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:46.607 19:11:30 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:46.607 19:11:30 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:46.607 19:11:30 sw_hotplug -- 
scripts/common.sh@302 -- # echo 0000:00:12.0 00:09:46.607 19:11:30 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:46.607 19:11:30 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:09:46.607 19:11:30 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:46.607 19:11:30 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:46.607 19:11:30 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:46.607 19:11:30 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:46.607 19:11:30 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:09:46.607 19:11:30 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:46.607 19:11:30 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:09:46.607 19:11:30 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:46.607 19:11:30 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:46.607 19:11:30 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:46.607 19:11:30 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:46.607 19:11:30 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:09:46.607 19:11:30 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:46.607 19:11:30 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:46.607 19:11:30 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:46.607 19:11:30 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:46.607 19:11:30 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:09:46.607 19:11:30 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:46.607 19:11:30 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:46.607 19:11:30 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:46.607 19:11:30 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:46.607 19:11:30 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:09:46.607 19:11:30 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:46.607 19:11:30 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:46.607 19:11:30 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:46.607 19:11:30 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:09:46.607 19:11:30 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:46.607 19:11:30 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:09:46.607 19:11:30 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:09:46.607 19:11:30 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:46.866 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:47.127 Waiting for block devices as requested 00:09:47.127 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:47.127 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:47.127 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:47.387 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:52.677 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:52.677 19:11:36 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:09:52.677 19:11:36 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:52.677 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:09:52.937 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:52.937 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:09:53.198 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:09:53.458 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:53.458 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:53.458 19:11:37 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:09:53.458 19:11:37 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:09:53.719 19:11:37 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:09:53.719 19:11:37 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:09:53.719 19:11:37 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=68496 00:09:53.719 19:11:37 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:09:53.719 19:11:37 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:09:53.719 19:11:37 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:09:53.719 19:11:37 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:09:53.719 19:11:37 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:09:53.719 19:11:37 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:09:53.719 19:11:37 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:09:53.719 19:11:37 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:09:53.719 19:11:37 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 false 00:09:53.719 19:11:37 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:09:53.719 19:11:37 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:09:53.719 19:11:37 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:09:53.719 19:11:37 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:09:53.719 19:11:37 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:09:53.719 Initializing NVMe Controllers 00:09:53.719 Attaching to 0000:00:10.0 00:09:53.719 Attaching to 0000:00:11.0 00:09:53.719 Attached to 0000:00:10.0 00:09:53.719 Attached to 0000:00:11.0 00:09:53.719 Initialization complete. Starting I/O... 
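The nvme_in_userspace trace further up finds hotplug candidates purely by PCI class code: class 01 (mass storage), subclass 08 (NVM), prog-if 02 (NVMe). The same enumeration can be reproduced with one lspci pipeline, assuming lspci is available (a simplified sketch, not the common.sh implementation, which additionally filters on prog-if 02):

    # -D: full domains, -n: numeric IDs, -mm: machine-readable; field 2 is the class code.
    lspci -Dnmm | awk '$2 ~ /0108/ { print $1 }'
    # prints 0000:00:10.0, 0000:00:11.0, 0000:00:12.0, 0000:00:13.0 on this VM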
00:09:53.719 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:09:53.719 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:09:53.719 00:09:55.106 QEMU NVMe Ctrl (12340 ): 1996 I/Os completed (+1996) 00:09:55.106 QEMU NVMe Ctrl (12341 ): 1996 I/Os completed (+1996) 00:09:55.106 00:09:56.050 QEMU NVMe Ctrl (12340 ): 4500 I/Os completed (+2504) 00:09:56.050 QEMU NVMe Ctrl (12341 ): 4509 I/Os completed (+2513) 00:09:56.050 00:09:56.995 QEMU NVMe Ctrl (12340 ): 7000 I/Os completed (+2500) 00:09:56.995 QEMU NVMe Ctrl (12341 ): 7009 I/Os completed (+2500) 00:09:56.995 00:09:57.935 QEMU NVMe Ctrl (12340 ): 9952 I/Os completed (+2952) 00:09:57.935 QEMU NVMe Ctrl (12341 ): 9961 I/Os completed (+2952) 00:09:57.935 00:09:58.870 QEMU NVMe Ctrl (12340 ): 13727 I/Os completed (+3775) 00:09:58.870 QEMU NVMe Ctrl (12341 ): 13751 I/Os completed (+3790) 00:09:58.870 00:09:59.810 19:11:43 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:09:59.810 19:11:43 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:59.810 19:11:43 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:59.810 [2024-12-16 19:11:43.850402] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:09:59.810 Controller removed: QEMU NVMe Ctrl (12340 ) 00:09:59.810 [2024-12-16 19:11:43.851991] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:59.810 [2024-12-16 19:11:43.852078] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:59.810 [2024-12-16 19:11:43.852097] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:59.810 [2024-12-16 19:11:43.852119] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:59.810 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:09:59.810 [2024-12-16 19:11:43.854490] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:59.810 [2024-12-16 19:11:43.854568] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:59.810 [2024-12-16 19:11:43.854586] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:59.810 [2024-12-16 19:11:43.854611] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:59.810 19:11:43 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:59.810 19:11:43 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:59.810 [2024-12-16 19:11:43.874379] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:09:59.810 Controller removed: QEMU NVMe Ctrl (12341 ) 00:09:59.810 [2024-12-16 19:11:43.875999] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:59.810 [2024-12-16 19:11:43.876210] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:59.810 [2024-12-16 19:11:43.876247] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:59.810 [2024-12-16 19:11:43.876268] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:59.810 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:09:59.810 [2024-12-16 19:11:43.878552] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:59.810 [2024-12-16 19:11:43.878681] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:59.810 [2024-12-16 19:11:43.878760] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:59.810 [2024-12-16 19:11:43.878793] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:59.810 19:11:43 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:09:59.810 19:11:43 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:09:59.810 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:09:59.810 EAL: Scan for (pci) bus failed. 00:09:59.810 19:11:44 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:59.810 19:11:44 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:59.810 19:11:44 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:09:59.810 00:09:59.810 19:11:44 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:09:59.810 19:11:44 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:59.810 19:11:44 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:59.810 19:11:44 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:59.810 19:11:44 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:09:59.810 Attaching to 0000:00:10.0 00:09:59.810 Attached to 0000:00:10.0 00:10:00.072 19:11:44 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:00.072 19:11:44 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:00.072 19:11:44 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:00.072 Attaching to 0000:00:11.0 00:10:00.072 Attached to 0000:00:11.0 00:10:01.015 QEMU NVMe Ctrl (12340 ): 2232 I/Os completed (+2232) 00:10:01.016 QEMU NVMe Ctrl (12341 ): 2039 I/Os completed (+2039) 00:10:01.016 00:10:01.955 QEMU NVMe Ctrl (12340 ): 5075 I/Os completed (+2843) 00:10:01.955 QEMU NVMe Ctrl (12341 ): 4894 I/Os completed (+2855) 00:10:01.955 00:10:02.917 QEMU NVMe Ctrl (12340 ): 8213 I/Os completed (+3138) 00:10:02.917 QEMU NVMe Ctrl (12341 ): 7993 I/Os completed (+3099) 00:10:02.917 00:10:03.860 QEMU NVMe Ctrl (12340 ): 10665 I/Os completed (+2452) 00:10:03.860 QEMU NVMe Ctrl (12341 ): 10450 I/Os completed (+2457) 00:10:03.860 00:10:04.858 QEMU NVMe Ctrl (12340 ): 13133 I/Os completed (+2468) 00:10:04.858 QEMU NVMe Ctrl (12341 ): 12920 I/Os completed (+2470) 00:10:04.858 00:10:05.801 QEMU NVMe Ctrl (12340 ): 15545 I/Os completed (+2412) 00:10:05.801 QEMU NVMe Ctrl (12341 ): 15332 I/Os completed (+2412) 00:10:05.801 00:10:06.742 QEMU NVMe Ctrl (12340 ): 18162 I/Os completed (+2617) 00:10:06.742 QEMU NVMe Ctrl (12341 ): 17949 I/Os completed (+2617) 
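Between these I/O samples the harness performs the surprise removals and re-attaches seen above. Judging by the device-failed errors and the uio_pci_generic rebinds, the 'echo 1' steps in the trace are sysfs hot-unplug writes of roughly this shape (a sketch of the mechanism, not the script's exact redirections):

    # Surprise-remove one NVMe function, then rescan so it can be re-enumerated.
    echo 1 > /sys/bus/pci/devices/0000:00:10.0/remove
    echo 1 > /sys/bus/pci/rescan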
00:10:06.742 00:10:08.128 QEMU NVMe Ctrl (12340 ): 20834 I/Os completed (+2672) 00:10:08.128 QEMU NVMe Ctrl (12341 ): 20621 I/Os completed (+2672) 00:10:08.128 00:10:08.700 QEMU NVMe Ctrl (12340 ): 23909 I/Os completed (+3075) 00:10:08.701 QEMU NVMe Ctrl (12341 ): 23693 I/Os completed (+3072) 00:10:08.701 00:10:10.086 QEMU NVMe Ctrl (12340 ): 27187 I/Os completed (+3278) 00:10:10.086 QEMU NVMe Ctrl (12341 ): 26929 I/Os completed (+3236) 00:10:10.086 00:10:11.028 QEMU NVMe Ctrl (12340 ): 30295 I/Os completed (+3108) 00:10:11.028 QEMU NVMe Ctrl (12341 ): 30041 I/Os completed (+3112) 00:10:11.028 00:10:11.971 QEMU NVMe Ctrl (12340 ): 33931 I/Os completed (+3636) 00:10:11.971 QEMU NVMe Ctrl (12341 ): 33693 I/Os completed (+3652) 00:10:11.971 00:10:11.971 19:11:56 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:11.971 19:11:56 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:11.971 19:11:56 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:11.971 19:11:56 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:11.971 [2024-12-16 19:11:56.207355] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:10:11.971 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:11.971 [2024-12-16 19:11:56.208679] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:11.971 [2024-12-16 19:11:56.208861] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:11.971 [2024-12-16 19:11:56.208901] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:11.971 [2024-12-16 19:11:56.208962] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:11.971 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:11.971 [2024-12-16 19:11:56.211200] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:11.971 [2024-12-16 19:11:56.211317] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:11.971 [2024-12-16 19:11:56.211352] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:11.971 [2024-12-16 19:11:56.211504] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:11.971 19:11:56 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:11.971 19:11:56 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:11.971 [2024-12-16 19:11:56.232645] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:10:11.971 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:11.971 [2024-12-16 19:11:56.233894] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:11.971 [2024-12-16 19:11:56.234013] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:11.972 [2024-12-16 19:11:56.234080] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:11.972 [2024-12-16 19:11:56.234112] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:11.972 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:11.972 [2024-12-16 19:11:56.236017] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:11.972 [2024-12-16 19:11:56.236116] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:11.972 [2024-12-16 19:11:56.236188] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:11.972 [2024-12-16 19:11:56.236221] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:11.972 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:10:11.972 EAL: Scan for (pci) bus failed. 00:10:11.972 19:11:56 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:11.972 19:11:56 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:11.972 19:11:56 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:11.972 19:11:56 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:11.972 19:11:56 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:12.233 19:11:56 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:12.233 19:11:56 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:12.233 19:11:56 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:12.233 19:11:56 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:12.233 19:11:56 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:12.233 Attaching to 0000:00:10.0 00:10:12.233 Attached to 0000:00:10.0 00:10:12.233 19:11:56 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:12.233 19:11:56 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:12.233 19:11:56 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:12.233 Attaching to 0000:00:11.0 00:10:12.233 Attached to 0000:00:11.0 00:10:12.804 QEMU NVMe Ctrl (12340 ): 2068 I/Os completed (+2068) 00:10:12.804 QEMU NVMe Ctrl (12341 ): 1839 I/Os completed (+1839) 00:10:12.804 00:10:13.747 QEMU NVMe Ctrl (12340 ): 5254 I/Os completed (+3186) 00:10:13.747 QEMU NVMe Ctrl (12341 ): 4940 I/Os completed (+3101) 00:10:13.747 00:10:15.122 QEMU NVMe Ctrl (12340 ): 8922 I/Os completed (+3668) 00:10:15.122 QEMU NVMe Ctrl (12341 ): 8421 I/Os completed (+3481) 00:10:15.122 00:10:16.056 QEMU NVMe Ctrl (12340 ): 12619 I/Os completed (+3697) 00:10:16.056 QEMU NVMe Ctrl (12341 ): 12150 I/Os completed (+3729) 00:10:16.056 00:10:16.988 QEMU NVMe Ctrl (12340 ): 16439 I/Os completed (+3820) 00:10:16.988 QEMU NVMe Ctrl (12341 ): 15974 I/Os completed (+3824) 00:10:16.988 00:10:17.921 QEMU NVMe Ctrl (12340 ): 20275 I/Os completed (+3836) 00:10:17.921 QEMU NVMe Ctrl (12341 ): 19814 I/Os completed (+3840) 00:10:17.921 00:10:18.854 QEMU NVMe Ctrl (12340 ): 24087 I/Os completed (+3812) 00:10:18.854 QEMU NVMe Ctrl (12341 ): 23638 I/Os completed (+3824) 00:10:18.854 
00:10:19.789 QEMU NVMe Ctrl (12340 ): 27809 I/Os completed (+3722) 00:10:19.789 QEMU NVMe Ctrl (12341 ): 27339 I/Os completed (+3701) 00:10:19.789 00:10:20.729 QEMU NVMe Ctrl (12340 ): 30932 I/Os completed (+3123) 00:10:20.729 QEMU NVMe Ctrl (12341 ): 30448 I/Os completed (+3109) 00:10:20.729 00:10:22.112 QEMU NVMe Ctrl (12340 ): 33428 I/Os completed (+2496) 00:10:22.112 QEMU NVMe Ctrl (12341 ): 32944 I/Os completed (+2496) 00:10:22.112 00:10:23.056 QEMU NVMe Ctrl (12340 ): 35960 I/Os completed (+2532) 00:10:23.056 QEMU NVMe Ctrl (12341 ): 35476 I/Os completed (+2532) 00:10:23.056 00:10:23.709 QEMU NVMe Ctrl (12340 ): 38532 I/Os completed (+2572) 00:10:23.709 QEMU NVMe Ctrl (12341 ): 38059 I/Os completed (+2583) 00:10:23.709 00:10:24.280 19:12:08 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:24.280 19:12:08 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:24.280 19:12:08 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:24.280 19:12:08 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:24.280 [2024-12-16 19:12:08.477117] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:10:24.280 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:24.280 [2024-12-16 19:12:08.478561] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:24.280 [2024-12-16 19:12:08.478642] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:24.280 [2024-12-16 19:12:08.478663] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:24.280 [2024-12-16 19:12:08.478683] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:24.280 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:24.280 [2024-12-16 19:12:08.480936] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:24.280 [2024-12-16 19:12:08.481012] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:24.280 [2024-12-16 19:12:08.481029] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:24.280 [2024-12-16 19:12:08.481046] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:24.280 19:12:08 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:24.280 19:12:08 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:24.280 [2024-12-16 19:12:08.504136] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
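[Editor's note] Each block like the one above is one iteration of the helper's outer loop: the "(( hotplug_events-- ))" at sw_hotplug.sh@38 counts down a fixed budget of surprise-removal cycles. A sketch of the loop skeleton implied by the @27/@28/@38 markers (body elided, sleep placement assumed):

    hotplug_events=3                   # sw_hotplug.sh@27
    hotplug_wait=6                     # sw_hotplug.sh@28
    sleep "$hotplug_wait"              # @36 logs "sleep 6": initial settle time
    while (( hotplug_events-- )); do
        :  # remove both devices, poll until gone, re-attach, let I/O resume
        sleep "$((2 * hotplug_wait))"  # @66 logs "sleep 12"; the factor is assumed
    done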
00:10:24.280 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:24.280 [2024-12-16 19:12:08.505593] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:24.280 [2024-12-16 19:12:08.505657] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:24.280 [2024-12-16 19:12:08.505677] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:24.280 [2024-12-16 19:12:08.505693] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:24.280 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:24.280 19:12:08 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:24.280 19:12:08 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:24.280 [2024-12-16 19:12:08.507873] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:24.281 [2024-12-16 19:12:08.507929] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:24.281 [2024-12-16 19:12:08.507952] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:24.281 [2024-12-16 19:12:08.507970] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:24.541 19:12:08 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:24.541 19:12:08 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:24.541 19:12:08 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:24.541 19:12:08 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:24.541 19:12:08 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:24.541 19:12:08 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:24.541 19:12:08 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:24.541 19:12:08 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:24.541 Attaching to 0000:00:10.0 00:10:24.541 Attached to 0000:00:10.0 00:10:24.541 19:12:08 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:24.541 19:12:08 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:24.541 19:12:08 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:24.541 Attaching to 0000:00:11.0 00:10:24.541 Attached to 0000:00:11.0 00:10:24.541 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:24.541 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:24.541 [2024-12-16 19:12:08.853001] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:10:36.775 19:12:20 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:36.775 19:12:20 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:36.775 19:12:20 sw_hotplug -- common/autotest_common.sh@719 -- # time=43.00 00:10:36.775 19:12:20 sw_hotplug -- common/autotest_common.sh@720 -- # echo 43.00 00:10:36.775 19:12:20 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:10:36.775 19:12:20 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=43.00 00:10:36.775 19:12:20 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 43.00 2 00:10:36.775 remove_attach_helper took 43.00s to complete (handling 2 nvme drive(s)) 19:12:20 sw_hotplug -- nvme/sw_hotplug.sh@91 -- # sleep 6 00:10:43.359 19:12:26 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 68496 00:10:43.359 
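[Editor's note] The "43.00" bookkeeping above comes from bash's time keyword: TIMEFORMAT=%2R prints elapsed wall time with two decimals, which the harness captures and echoes back as helper_time. A minimal sketch of that timing pattern (function body assumed; only the variable names and the printf appear in the xtrace):

    # Time a command and emit its wall-clock seconds, as in the
    # "local time=0 TIMEFORMAT=%2R" / "echo 43.00" trace above.
    timing_cmd() {
        local time=0 TIMEFORMAT=%2R
        # `time` reports on stderr; swap streams to capture just the figure.
        time=$( { time "$@" 1>&2; } 2>&1 )
        echo "$time"
    }
    helper_time=$(timing_cmd remove_attach_helper 3 6 true)
    printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))\n' \
        "$helper_time" 2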
/home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (68496) - No such process 00:10:43.359 19:12:26 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 68496 00:10:43.359 19:12:26 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:10:43.359 19:12:26 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:10:43.359 19:12:26 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:10:43.360 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:43.360 19:12:26 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=69044 00:10:43.360 19:12:26 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:10:43.360 19:12:26 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 69044 00:10:43.360 19:12:26 sw_hotplug -- common/autotest_common.sh@835 -- # '[' -z 69044 ']' 00:10:43.360 19:12:26 sw_hotplug -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:43.360 19:12:26 sw_hotplug -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:43.360 19:12:26 sw_hotplug -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:43.360 19:12:26 sw_hotplug -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:43.360 19:12:26 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:43.360 19:12:26 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:43.360 [2024-12-16 19:12:26.945612] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:10:43.360 [2024-12-16 19:12:26.945757] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69044 ] 00:10:43.360 [2024-12-16 19:12:27.109705] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:43.360 [2024-12-16 19:12:27.231988] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:43.932 19:12:28 sw_hotplug -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:43.932 19:12:28 sw_hotplug -- common/autotest_common.sh@868 -- # return 0 00:10:43.932 19:12:28 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:10:43.932 19:12:28 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.932 19:12:28 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:43.932 19:12:28 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.932 19:12:28 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:10:43.932 19:12:28 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:10:43.932 19:12:28 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:10:43.932 19:12:28 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:10:43.932 19:12:28 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:10:43.932 19:12:28 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:10:43.932 19:12:28 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:10:43.932 19:12:28 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:10:43.932 19:12:28 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:10:43.932 19:12:28 
sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:10:43.932 19:12:28 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:10:43.932 19:12:28 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:10:43.932 19:12:28 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:10:50.502 19:12:34 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:50.502 19:12:34 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:50.502 19:12:34 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:50.502 19:12:34 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:50.502 19:12:34 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:50.502 19:12:34 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:10:50.502 19:12:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:50.502 19:12:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:50.502 19:12:34 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:50.502 19:12:34 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:50.502 19:12:34 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:50.502 19:12:34 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.502 19:12:34 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:50.502 19:12:34 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.502 19:12:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:10:50.502 19:12:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:50.502 [2024-12-16 19:12:34.127393] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:10:50.502 [2024-12-16 19:12:34.128662] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:50.502 [2024-12-16 19:12:34.128701] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:50.502 [2024-12-16 19:12:34.128715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:50.502 [2024-12-16 19:12:34.128738] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:50.502 [2024-12-16 19:12:34.128746] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:50.502 [2024-12-16 19:12:34.128755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:50.502 [2024-12-16 19:12:34.128762] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:50.502 [2024-12-16 19:12:34.128770] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:50.502 [2024-12-16 19:12:34.128777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:50.502 [2024-12-16 19:12:34.128789] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:50.502 [2024-12-16 19:12:34.128795] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:50.502 [2024-12-16 19:12:34.128803] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:50.502 [2024-12-16 19:12:34.527375] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:10:50.502 [2024-12-16 19:12:34.528688] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:50.502 [2024-12-16 19:12:34.528816] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:50.502 [2024-12-16 19:12:34.528833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:50.502 [2024-12-16 19:12:34.528848] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:50.502 [2024-12-16 19:12:34.528858] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:50.502 [2024-12-16 19:12:34.528866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:50.502 [2024-12-16 19:12:34.528876] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:50.502 [2024-12-16 19:12:34.528882] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:50.502 [2024-12-16 19:12:34.528890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:50.502 [2024-12-16 19:12:34.528897] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:50.502 [2024-12-16 19:12:34.528905] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:50.502 [2024-12-16 19:12:34.528912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:50.502 19:12:34 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:10:50.502 19:12:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:50.502 19:12:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:50.502 19:12:34 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:50.502 19:12:34 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:50.502 19:12:34 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:50.502 19:12:34 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.502 19:12:34 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:50.502 19:12:34 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.502 19:12:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:10:50.502 19:12:34 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:50.502 19:12:34 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:50.502 19:12:34 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:50.502 19:12:34 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:50.502 19:12:34 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:50.502 19:12:34 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 
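[Editor's note] The /dev/fd/63 in the jq invocation above is bash process substitution: bdev_bdfs pipes the RPC output straight into jq. Reassembled from the sw_hotplug.sh@12-13 xtrace; rpc_cmd is the harness wrapper around scripts/rpc.py and the /var/tmp/spdk.sock socket named earlier in the log:

    # List the PCI addresses backing the current NVMe bdevs,
    # one per line, de-duplicated (matches the @12-13 trace).
    bdev_bdfs() {
        jq -r '.[].driver_specific.nvme[].pci_address' \
            <(rpc_cmd bdev_get_bdevs) | sort -u
    }
    bdfs=($(bdev_bdfs))   # @50: e.g. (0000:00:10.0 0000:00:11.0)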
00:10:50.502 19:12:34 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:50.502 19:12:34 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:50.502 19:12:34 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:50.760 19:12:34 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:50.760 19:12:34 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:50.760 19:12:34 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:02.959 19:12:46 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:02.959 19:12:46 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:02.959 19:12:46 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:02.959 19:12:46 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:02.959 19:12:46 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:02.959 19:12:46 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:02.959 19:12:46 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.959 19:12:46 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:02.959 19:12:46 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.959 19:12:46 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:02.959 19:12:46 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:02.959 19:12:46 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:02.959 19:12:46 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:02.959 [2024-12-16 19:12:46.927585] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:11:02.959 [2024-12-16 19:12:46.929146] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:02.959 [2024-12-16 19:12:46.929273] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:02.959 [2024-12-16 19:12:46.929339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:02.959 [2024-12-16 19:12:46.929379] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:02.959 [2024-12-16 19:12:46.929397] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:02.959 [2024-12-16 19:12:46.929423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:02.959 [2024-12-16 19:12:46.929485] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:02.959 [2024-12-16 19:12:46.929505] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:02.959 [2024-12-16 19:12:46.929529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:02.959 [2024-12-16 19:12:46.929555] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:02.959 [2024-12-16 19:12:46.929572] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:02.959 [2024-12-16 19:12:46.929623] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:02.959 19:12:46 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:02.959 19:12:46 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:02.959 19:12:46 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:02.959 19:12:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:02.959 19:12:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:02.959 19:12:46 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:02.959 19:12:46 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:02.959 19:12:46 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:02.959 19:12:46 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.959 19:12:46 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:02.959 19:12:46 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.959 19:12:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:02.959 19:12:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:03.218 [2024-12-16 19:12:47.327580] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:11:03.218 [2024-12-16 19:12:47.328797] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:03.218 [2024-12-16 19:12:47.328825] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:03.218 [2024-12-16 19:12:47.328838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:03.218 [2024-12-16 19:12:47.328852] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:03.218 [2024-12-16 19:12:47.328862] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:03.218 [2024-12-16 19:12:47.328870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:03.218 [2024-12-16 19:12:47.328879] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:03.218 [2024-12-16 19:12:47.328886] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:03.218 [2024-12-16 19:12:47.328894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:03.218 [2024-12-16 19:12:47.328901] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:03.218 [2024-12-16 19:12:47.328909] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:03.218 [2024-12-16 19:12:47.328915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:03.218 19:12:47 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:03.218 19:12:47 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:03.218 19:12:47 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:03.218 19:12:47 
sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:03.218 19:12:47 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:03.218 19:12:47 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:03.218 19:12:47 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.218 19:12:47 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:03.218 19:12:47 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.218 19:12:47 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:03.218 19:12:47 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:03.476 19:12:47 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:03.476 19:12:47 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:03.476 19:12:47 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:03.476 19:12:47 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:03.476 19:12:47 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:03.476 19:12:47 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:03.476 19:12:47 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:03.476 19:12:47 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:03.476 19:12:47 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:03.476 19:12:47 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:03.476 19:12:47 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:15.675 19:12:59 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:15.675 19:12:59 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:15.675 19:12:59 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:15.675 19:12:59 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:15.675 19:12:59 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:15.675 19:12:59 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:15.675 19:12:59 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.675 19:12:59 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:15.675 19:12:59 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.675 19:12:59 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:15.675 19:12:59 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:15.675 19:12:59 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:15.675 19:12:59 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:15.675 19:12:59 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:15.675 19:12:59 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:15.675 19:12:59 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:15.675 19:12:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:15.675 19:12:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:15.675 19:12:59 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:15.675 19:12:59 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:15.675 19:12:59 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.675 19:12:59 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:15.675 19:12:59 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:15.675 19:12:59 sw_hotplug -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.675 19:12:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:15.675 19:12:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:15.675 [2024-12-16 19:12:59.927787] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:11:15.675 [2024-12-16 19:12:59.929103] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:15.675 [2024-12-16 19:12:59.929141] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:15.675 [2024-12-16 19:12:59.929153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:15.675 [2024-12-16 19:12:59.929182] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:15.675 [2024-12-16 19:12:59.929190] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:15.675 [2024-12-16 19:12:59.929200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:15.675 [2024-12-16 19:12:59.929208] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:15.675 [2024-12-16 19:12:59.929217] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:15.675 [2024-12-16 19:12:59.929224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:15.675 [2024-12-16 19:12:59.929232] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:15.675 [2024-12-16 19:12:59.929239] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:15.675 [2024-12-16 19:12:59.929248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:16.241 [2024-12-16 19:13:00.327796] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
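[Editor's note] The "Still waiting for ... to be gone" lines are the target-side removal check: after the devices are disabled, the helper polls bdev_bdfs until no BDF under test remains. Loop framing reconstructed from the @50-51 xtrace (the count in "(( 2 > 0 ))" drops to 1, then 0, as each controller disappears):

    # Poll until the hot-removed controllers' bdevs are gone.
    bdfs=($(bdev_bdfs))
    while (( ${#bdfs[@]} > 0 )); do
        sleep 0.5
        printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
        bdfs=($(bdev_bdfs))
    done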
00:11:16.241 [2024-12-16 19:13:00.329108] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:16.241 [2024-12-16 19:13:00.329140] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:16.241 [2024-12-16 19:13:00.329154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:16.241 [2024-12-16 19:13:00.329185] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:16.241 [2024-12-16 19:13:00.329194] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:16.241 [2024-12-16 19:13:00.329202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:16.241 [2024-12-16 19:13:00.329211] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:16.241 [2024-12-16 19:13:00.329218] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:16.241 [2024-12-16 19:13:00.329228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:16.241 [2024-12-16 19:13:00.329235] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:16.241 [2024-12-16 19:13:00.329243] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:16.241 [2024-12-16 19:13:00.329249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:16.241 19:13:00 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:16.241 19:13:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:16.241 19:13:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:16.241 19:13:00 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:16.241 19:13:00 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:16.241 19:13:00 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:16.241 19:13:00 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.241 19:13:00 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:16.241 19:13:00 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.241 19:13:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:16.241 19:13:00 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:16.241 19:13:00 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:16.241 19:13:00 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:16.241 19:13:00 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:16.499 19:13:00 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:16.499 19:13:00 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:16.499 19:13:00 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:16.499 19:13:00 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:16.499 19:13:00 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:11:16.499 19:13:00 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:16.499 19:13:00 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:16.499 19:13:00 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:28.694 19:13:12 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:28.694 19:13:12 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:28.694 19:13:12 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:28.694 19:13:12 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:28.694 19:13:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:28.694 19:13:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:28.694 19:13:12 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.694 19:13:12 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:28.694 19:13:12 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.694 19:13:12 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:28.694 19:13:12 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:28.694 19:13:12 sw_hotplug -- common/autotest_common.sh@719 -- # time=44.68 00:11:28.694 19:13:12 sw_hotplug -- common/autotest_common.sh@720 -- # echo 44.68 00:11:28.694 19:13:12 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:11:28.694 19:13:12 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=44.68 00:11:28.694 19:13:12 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 44.68 2 00:11:28.694 remove_attach_helper took 44.68s to complete (handling 2 nvme drive(s)) 19:13:12 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:11:28.694 19:13:12 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.694 19:13:12 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:28.694 19:13:12 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.694 19:13:12 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:11:28.694 19:13:12 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.694 19:13:12 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:28.694 19:13:12 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.694 19:13:12 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:11:28.694 19:13:12 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:11:28.694 19:13:12 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:11:28.695 19:13:12 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:11:28.695 19:13:12 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:11:28.695 19:13:12 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:11:28.695 19:13:12 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:11:28.695 19:13:12 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:11:28.695 19:13:12 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:11:28.695 19:13:12 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:11:28.695 19:13:12 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:11:28.695 19:13:12 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:11:28.695 19:13:12 sw_hotplug -- 
nvme/sw_hotplug.sh@36 -- # sleep 6 00:11:35.253 19:13:18 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:35.253 19:13:18 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:35.253 19:13:18 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:35.253 19:13:18 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:35.253 19:13:18 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:35.253 19:13:18 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:35.253 19:13:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:35.253 19:13:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:35.253 19:13:18 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:35.253 19:13:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:35.253 19:13:18 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.253 19:13:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:35.253 19:13:18 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:35.253 19:13:18 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.253 19:13:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:35.253 19:13:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:35.253 [2024-12-16 19:13:18.845530] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:11:35.253 [2024-12-16 19:13:18.846615] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:35.253 [2024-12-16 19:13:18.846722] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:35.253 [2024-12-16 19:13:18.846782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:35.253 [2024-12-16 19:13:18.846822] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:35.253 [2024-12-16 19:13:18.846968] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:35.253 [2024-12-16 19:13:18.847056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:35.253 [2024-12-16 19:13:18.847083] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:35.253 [2024-12-16 19:13:18.847101] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:35.253 [2024-12-16 19:13:18.847124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:35.253 [2024-12-16 19:13:18.847159] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:35.253 [2024-12-16 19:13:18.847186] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:35.253 [2024-12-16 19:13:18.847257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:35.253 [2024-12-16 19:13:19.245525] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
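[Editor's note] For this tgt_run_hotplug phase the harness flips SPDK's built-in hotplug monitor over RPC (the bdev_nvme_set_hotplug calls at sw_hotplug.sh@115/@119/@120 in the trace), so re-attached controllers are picked up without restarting the target:

    # Toggle the NVMe hotplug monitor in the running spdk_tgt.
    rpc_cmd bdev_nvme_set_hotplug -d   # disable (sw_hotplug.sh@119)
    rpc_cmd bdev_nvme_set_hotplug -e   # enable  (sw_hotplug.sh@120)
    # Direct equivalent, assuming the default RPC socket:
    #   scripts/rpc.py -s /var/tmp/spdk.sock bdev_nvme_set_hotplug -e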
00:11:35.253 [2024-12-16 19:13:19.246525] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:35.253 [2024-12-16 19:13:19.246617] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:35.253 [2024-12-16 19:13:19.246681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:35.253 [2024-12-16 19:13:19.246738] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:35.253 [2024-12-16 19:13:19.246759] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:35.253 [2024-12-16 19:13:19.246806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:35.253 [2024-12-16 19:13:19.246834] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:35.253 [2024-12-16 19:13:19.246955] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:35.253 [2024-12-16 19:13:19.247015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:35.253 [2024-12-16 19:13:19.247039] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:35.253 [2024-12-16 19:13:19.247056] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:35.253 [2024-12-16 19:13:19.247079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:35.253 19:13:19 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:35.253 19:13:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:35.253 19:13:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:35.253 19:13:19 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:35.253 19:13:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:35.253 19:13:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:35.253 19:13:19 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.253 19:13:19 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:35.253 19:13:19 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.253 19:13:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:35.253 19:13:19 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:35.253 19:13:19 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:35.253 19:13:19 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:35.253 19:13:19 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:35.253 19:13:19 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:35.253 19:13:19 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:35.253 19:13:19 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:35.253 19:13:19 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:35.253 19:13:19 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:11:35.512 19:13:19 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:35.512 19:13:19 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:35.512 19:13:19 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:47.710 19:13:31 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:47.710 19:13:31 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:47.710 19:13:31 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:47.710 19:13:31 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:47.710 19:13:31 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:47.710 19:13:31 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:47.710 19:13:31 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.710 19:13:31 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:47.710 19:13:31 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.710 19:13:31 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:47.710 19:13:31 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:47.710 19:13:31 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:47.710 19:13:31 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:47.710 19:13:31 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:47.711 19:13:31 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:47.711 19:13:31 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:47.711 19:13:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:47.711 19:13:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:47.711 19:13:31 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:47.711 19:13:31 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:47.711 19:13:31 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:47.711 19:13:31 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.711 19:13:31 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:47.711 19:13:31 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.711 [2024-12-16 19:13:31.745741] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:11:47.711 [2024-12-16 19:13:31.746815] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:47.711 [2024-12-16 19:13:31.746919] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:47.711 [2024-12-16 19:13:31.746975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:47.711 [2024-12-16 19:13:31.747016] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:47.711 [2024-12-16 19:13:31.747035] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:47.711 [2024-12-16 19:13:31.747060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:47.711 [2024-12-16 19:13:31.747085] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:47.711 [2024-12-16 19:13:31.747103] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:47.711 [2024-12-16 19:13:31.747227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:47.711 [2024-12-16 19:13:31.747260] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:47.711 [2024-12-16 19:13:31.747278] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:47.711 [2024-12-16 19:13:31.747304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:47.711 19:13:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:47.711 19:13:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:47.969 [2024-12-16 19:13:32.245731] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:11:47.969 [2024-12-16 19:13:32.250757] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:47.969 [2024-12-16 19:13:32.250859] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:47.969 [2024-12-16 19:13:32.250974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:47.969 [2024-12-16 19:13:32.251007] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:47.969 [2024-12-16 19:13:32.251136] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:47.969 [2024-12-16 19:13:32.251273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:47.969 [2024-12-16 19:13:32.251302] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:47.969 [2024-12-16 19:13:32.251318] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:47.969 [2024-12-16 19:13:32.251343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:47.969 [2024-12-16 19:13:32.251368] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:47.969 [2024-12-16 19:13:32.251386] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:47.969 [2024-12-16 19:13:32.251453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:47.969 19:13:32 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:47.969 19:13:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:47.969 19:13:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:47.969 19:13:32 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:47.969 19:13:32 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:47.969 19:13:32 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:47.969 19:13:32 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.969 19:13:32 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:47.969 19:13:32 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.969 19:13:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:47.969 19:13:32 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:48.226 19:13:32 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:48.226 19:13:32 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:48.226 19:13:32 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:48.226 19:13:32 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:48.226 19:13:32 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:48.226 19:13:32 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:48.226 19:13:32 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:48.226 19:13:32 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:11:48.226 19:13:32 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:48.226 19:13:32 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:48.226 19:13:32 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:00.425 19:13:44 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:00.425 19:13:44 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:00.425 19:13:44 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:00.425 19:13:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:00.425 19:13:44 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:00.425 19:13:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:00.425 19:13:44 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.425 19:13:44 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:00.425 19:13:44 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.425 19:13:44 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:00.425 19:13:44 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:00.425 19:13:44 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:00.425 19:13:44 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:00.425 19:13:44 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:00.425 19:13:44 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:00.425 19:13:44 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:00.425 19:13:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:00.425 19:13:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:00.425 19:13:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:00.425 19:13:44 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:00.425 19:13:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:00.425 19:13:44 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.425 19:13:44 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:00.425 [2024-12-16 19:13:44.645949] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:12:00.425 [2024-12-16 19:13:44.646933] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:00.425 [2024-12-16 19:13:44.646964] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:00.425 [2024-12-16 19:13:44.646975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:00.425 [2024-12-16 19:13:44.646994] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:00.425 [2024-12-16 19:13:44.647002] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:00.425 [2024-12-16 19:13:44.647013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:00.425 [2024-12-16 19:13:44.647021] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:00.425 [2024-12-16 19:13:44.647031] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:00.425 [2024-12-16 19:13:44.647038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:00.425 [2024-12-16 19:13:44.647046] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:00.425 [2024-12-16 19:13:44.647053] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:00.425 [2024-12-16 19:13:44.647062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:00.425 19:13:44 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.425 19:13:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:12:00.425 19:13:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:00.991 19:13:45 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:12:00.991 19:13:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:00.991 19:13:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:00.991 19:13:45 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:00.991 19:13:45 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:00.991 19:13:45 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:00.991 19:13:45 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.991 19:13:45 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:00.991 19:13:45 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.991 19:13:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:12:00.991 19:13:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:00.991 [2024-12-16 19:13:45.245949] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:12:00.991 [2024-12-16 19:13:45.246897] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:00.991 [2024-12-16 19:13:45.246925] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:00.991 [2024-12-16 19:13:45.246937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:00.991 [2024-12-16 19:13:45.246950] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:00.991 [2024-12-16 19:13:45.246958] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:00.991 [2024-12-16 19:13:45.246965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:00.991 [2024-12-16 19:13:45.246973] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:00.991 [2024-12-16 19:13:45.246980] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:00.991 [2024-12-16 19:13:45.246988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:00.991 [2024-12-16 19:13:45.246995] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:00.991 [2024-12-16 19:13:45.247005] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:00.991 [2024-12-16 19:13:45.247011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:01.557 19:13:45 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:12:01.557 19:13:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:01.557 19:13:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:01.557 19:13:45 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:01.558 19:13:45 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:01.558 19:13:45 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:01.558 19:13:45 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.558 19:13:45 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:01.558 19:13:45 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.558 19:13:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:01.558 19:13:45 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:01.558 19:13:45 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:01.558 19:13:45 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:01.558 19:13:45 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:01.816 19:13:45 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:01.816 19:13:45 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:01.816 19:13:45 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:01.816 19:13:45 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:01.816 19:13:45 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
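With both controllers gone, the test re-attaches them: sw_hotplug.sh@56 re-arms hotplug ("echo 1"), and the @58-@62 loop emits "uio_pci_generic", the BDF twice, and an empty string per device; the same sequence continues for 0000:00:11.0 just below. xtrace never prints redirection targets, so the sysfs paths are not in the log; the sketch below marks every target as an assumption:

    # Shape of the re-attach loop (sw_hotplug.sh@58-62). Only the echoed
    # values appear in the trace; each redirection target is an assumption
    # based on the conventional sysfs driver_override/bind interface.
    for dev in "${nvmes[@]}"; do
        echo uio_pci_generic  # @59 -> driver_override for $dev (assumed target)
        echo "$dev"           # @60 -> a driver probe/bind path (assumed target)
        echo "$dev"           # @61 -> a second probe/bind path (assumed target)
        echo ''               # @62 -> clears the override again (assumed target)
    done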
00:12:01.816 19:13:46 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:01.816 19:13:46 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:01.816 19:13:46 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:14.057 19:13:58 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:14.057 19:13:58 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:14.057 19:13:58 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:14.057 19:13:58 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:14.058 19:13:58 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:14.058 19:13:58 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:14.058 19:13:58 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.058 19:13:58 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:14.058 19:13:58 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.058 19:13:58 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:14.058 19:13:58 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:14.058 19:13:58 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.31 00:12:14.058 19:13:58 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.31 00:12:14.058 19:13:58 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:12:14.058 19:13:58 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.31 00:12:14.058 19:13:58 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.31 2 00:12:14.058 remove_attach_helper took 45.31s to complete (handling 2 nvme drive(s)) 19:13:58 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:12:14.058 19:13:58 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 69044 00:12:14.058 19:13:58 sw_hotplug -- common/autotest_common.sh@954 -- # '[' -z 69044 ']' 00:12:14.058 19:13:58 sw_hotplug -- common/autotest_common.sh@958 -- # kill -0 69044 00:12:14.058 19:13:58 sw_hotplug -- common/autotest_common.sh@959 -- # uname 00:12:14.058 19:13:58 sw_hotplug -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:14.058 19:13:58 sw_hotplug -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69044 00:12:14.058 19:13:58 sw_hotplug -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:14.058 19:13:58 sw_hotplug -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:14.058 19:13:58 sw_hotplug -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69044' 00:12:14.058 killing process with pid 69044 00:12:14.058 19:13:58 sw_hotplug -- common/autotest_common.sh@973 -- # kill 69044 00:12:14.058 19:13:58 sw_hotplug -- common/autotest_common.sh@978 -- # wait 69044 00:12:15.437 19:13:59 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:15.437 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:16.008 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:16.008 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:16.008 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:12:16.008 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:12:16.270 00:12:16.270 real 2m30.169s 00:12:16.270 user 1m52.395s 00:12:16.270 sys 0m16.296s 00:12:16.270 19:14:00 sw_hotplug -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:12:16.270 19:14:00 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:16.270 ************************************ 00:12:16.270 END TEST sw_hotplug 00:12:16.270 ************************************ 00:12:16.270 19:14:00 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:12:16.270 19:14:00 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:16.270 19:14:00 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:16.270 19:14:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:16.270 19:14:00 -- common/autotest_common.sh@10 -- # set +x 00:12:16.270 ************************************ 00:12:16.271 START TEST nvme_xnvme 00:12:16.271 ************************************ 00:12:16.271 19:14:00 nvme_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:16.271 * Looking for test storage... 00:12:16.271 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:16.271 19:14:00 nvme_xnvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:16.271 19:14:00 nvme_xnvme -- common/autotest_common.sh@1711 -- # lcov --version 00:12:16.271 19:14:00 nvme_xnvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:16.271 19:14:00 nvme_xnvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:16.271 19:14:00 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:16.271 19:14:00 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:16.271 19:14:00 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:16.271 19:14:00 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:12:16.271 19:14:00 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:12:16.271 19:14:00 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:12:16.271 19:14:00 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:12:16.271 19:14:00 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:12:16.271 19:14:00 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:12:16.271 19:14:00 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:12:16.271 19:14:00 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:16.271 19:14:00 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:12:16.271 19:14:00 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:12:16.271 19:14:00 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:16.271 19:14:00 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:16.271 19:14:00 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:12:16.271 19:14:00 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:12:16.271 19:14:00 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:16.271 19:14:00 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:12:16.271 19:14:00 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:12:16.271 19:14:00 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:12:16.271 19:14:00 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:12:16.271 19:14:00 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:16.271 19:14:00 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:12:16.271 19:14:00 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:12:16.271 19:14:00 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:16.271 19:14:00 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:16.271 19:14:00 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:12:16.271 19:14:00 nvme_xnvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:16.271 19:14:00 nvme_xnvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:16.271 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:16.271 --rc genhtml_branch_coverage=1 00:12:16.271 --rc genhtml_function_coverage=1 00:12:16.271 --rc genhtml_legend=1 00:12:16.271 --rc geninfo_all_blocks=1 00:12:16.271 --rc geninfo_unexecuted_blocks=1 00:12:16.271 00:12:16.271 ' 00:12:16.271 19:14:00 nvme_xnvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:16.271 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:16.271 --rc genhtml_branch_coverage=1 00:12:16.271 --rc genhtml_function_coverage=1 00:12:16.271 --rc genhtml_legend=1 00:12:16.271 --rc geninfo_all_blocks=1 00:12:16.271 --rc geninfo_unexecuted_blocks=1 00:12:16.271 00:12:16.271 ' 00:12:16.271 19:14:00 nvme_xnvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:16.271 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:16.271 --rc genhtml_branch_coverage=1 00:12:16.271 --rc genhtml_function_coverage=1 00:12:16.271 --rc genhtml_legend=1 00:12:16.271 --rc geninfo_all_blocks=1 00:12:16.271 --rc geninfo_unexecuted_blocks=1 00:12:16.271 00:12:16.271 ' 00:12:16.271 19:14:00 nvme_xnvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:16.271 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:16.271 --rc genhtml_branch_coverage=1 00:12:16.271 --rc genhtml_function_coverage=1 00:12:16.271 --rc genhtml_legend=1 00:12:16.271 --rc geninfo_all_blocks=1 00:12:16.271 --rc geninfo_unexecuted_blocks=1 00:12:16.271 00:12:16.271 ' 00:12:16.271 19:14:00 nvme_xnvme -- xnvme/common.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/dd/common.sh 00:12:16.271 19:14:00 nvme_xnvme -- dd/common.sh@6 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:12:16.271 19:14:00 nvme_xnvme -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:12:16.271 19:14:00 nvme_xnvme -- common/autotest_common.sh@34 -- # set -e 00:12:16.271 19:14:00 nvme_xnvme -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:12:16.271 19:14:00 nvme_xnvme -- common/autotest_common.sh@36 -- # shopt -s extglob 00:12:16.271 19:14:00 nvme_xnvme -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:12:16.271 19:14:00 nvme_xnvme -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:12:16.271 19:14:00 
nvme_xnvme -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:12:16.271 19:14:00 nvme_xnvme -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:12:16.271 19:14:00 nvme_xnvme -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:12:16.271 19:14:00 nvme_xnvme -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:12:16.271 19:14:00 nvme_xnvme -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:12:16.271 19:14:00 nvme_xnvme -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:12:16.271 19:14:00 nvme_xnvme -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:12:16.271 19:14:00 nvme_xnvme -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:12:16.271 19:14:00 nvme_xnvme -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:12:16.271 19:14:00 nvme_xnvme -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:12:16.271 19:14:00 nvme_xnvme -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:12:16.271 19:14:00 nvme_xnvme -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:12:16.271 19:14:00 nvme_xnvme -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:12:16.271 19:14:00 nvme_xnvme -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:12:16.271 19:14:00 nvme_xnvme -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:12:16.271 19:14:00 nvme_xnvme -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:12:16.271 19:14:00 nvme_xnvme -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:12:16.271 19:14:00 nvme_xnvme -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:12:16.271 19:14:00 nvme_xnvme -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:12:16.271 19:14:00 nvme_xnvme -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:12:16.271 19:14:00 nvme_xnvme -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:12:16.271 19:14:00 nvme_xnvme -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:12:16.271 19:14:00 nvme_xnvme -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:12:16.271 19:14:00 nvme_xnvme -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:12:16.271 19:14:00 nvme_xnvme -- common/build_config.sh@23 -- # CONFIG_CET=n 00:12:16.271 19:14:00 nvme_xnvme -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:12:16.271 19:14:00 nvme_xnvme -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:12:16.271 19:14:00 nvme_xnvme -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:12:16.271 19:14:00 nvme_xnvme -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:12:16.271 19:14:00 nvme_xnvme -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:12:16.271 19:14:00 nvme_xnvme -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:12:16.271 19:14:00 nvme_xnvme -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:12:16.271 19:14:00 nvme_xnvme -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:12:16.271 19:14:00 nvme_xnvme -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:12:16.271 19:14:00 nvme_xnvme -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:12:16.271 19:14:00 nvme_xnvme -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:12:16.271 19:14:00 nvme_xnvme -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:12:16.271 19:14:00 nvme_xnvme -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:12:16.271 19:14:00 nvme_xnvme -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:12:16.271 19:14:00 nvme_xnvme -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 
00:12:16.271 19:14:00 nvme_xnvme -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:12:16.271 19:14:00 nvme_xnvme -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:12:16.271 19:14:00 nvme_xnvme -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:12:16.271 19:14:00 nvme_xnvme -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:12:16.271 19:14:00 nvme_xnvme -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:12:16.271 19:14:00 nvme_xnvme -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:12:16.271 19:14:00 nvme_xnvme -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:12:16.271 19:14:00 nvme_xnvme -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:12:16.271 19:14:00 nvme_xnvme -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:12:16.271 19:14:00 nvme_xnvme -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:12:16.271 19:14:00 nvme_xnvme -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:12:16.271 19:14:00 nvme_xnvme -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:12:16.271 19:14:00 nvme_xnvme -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:12:16.271 19:14:00 nvme_xnvme -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:12:16.271 19:14:00 nvme_xnvme -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:12:16.271 19:14:00 nvme_xnvme -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:12:16.271 19:14:00 nvme_xnvme -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:12:16.271 19:14:00 nvme_xnvme -- common/build_config.sh@56 -- # CONFIG_XNVME=y 00:12:16.271 19:14:00 nvme_xnvme -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:12:16.271 19:14:00 nvme_xnvme -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:12:16.271 19:14:00 nvme_xnvme -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:12:16.271 19:14:00 nvme_xnvme -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:12:16.271 19:14:00 nvme_xnvme -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:12:16.271 19:14:00 nvme_xnvme -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:12:16.271 19:14:00 nvme_xnvme -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:12:16.271 19:14:00 nvme_xnvme -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:12:16.271 19:14:00 nvme_xnvme -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:12:16.271 19:14:00 nvme_xnvme -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:12:16.271 19:14:00 nvme_xnvme -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:12:16.271 19:14:00 nvme_xnvme -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:12:16.271 19:14:00 nvme_xnvme -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:12:16.271 19:14:00 nvme_xnvme -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:12:16.271 19:14:00 nvme_xnvme -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:12:16.272 19:14:00 nvme_xnvme -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:12:16.272 19:14:00 nvme_xnvme -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:12:16.272 19:14:00 nvme_xnvme -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:12:16.272 19:14:00 nvme_xnvme -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:12:16.272 19:14:00 nvme_xnvme -- common/build_config.sh@76 -- # CONFIG_FC=n 00:12:16.272 19:14:00 nvme_xnvme -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:12:16.272 19:14:00 nvme_xnvme -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:12:16.272 19:14:00 nvme_xnvme -- 
common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:12:16.272 19:14:00 nvme_xnvme -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:12:16.272 19:14:00 nvme_xnvme -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:12:16.272 19:14:00 nvme_xnvme -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:12:16.272 19:14:00 nvme_xnvme -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:12:16.272 19:14:00 nvme_xnvme -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:12:16.272 19:14:00 nvme_xnvme -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:12:16.272 19:14:00 nvme_xnvme -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:12:16.272 19:14:00 nvme_xnvme -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:12:16.272 19:14:00 nvme_xnvme -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:12:16.272 19:14:00 nvme_xnvme -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:12:16.272 19:14:00 nvme_xnvme -- common/build_config.sh@90 -- # CONFIG_URING=n 00:12:16.272 19:14:00 nvme_xnvme -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:12:16.536 19:14:00 nvme_xnvme -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:12:16.536 19:14:00 nvme_xnvme -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:12:16.536 19:14:00 nvme_xnvme -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:12:16.536 19:14:00 nvme_xnvme -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:12:16.536 19:14:00 nvme_xnvme -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:12:16.536 19:14:00 nvme_xnvme -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:12:16.536 19:14:00 nvme_xnvme -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:12:16.536 19:14:00 nvme_xnvme -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:12:16.536 19:14:00 nvme_xnvme -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:12:16.536 19:14:00 nvme_xnvme -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:12:16.536 19:14:00 nvme_xnvme -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:12:16.536 19:14:00 nvme_xnvme -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:12:16.536 19:14:00 nvme_xnvme -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:12:16.536 19:14:00 nvme_xnvme -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:12:16.536 19:14:00 nvme_xnvme -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:12:16.536 #define SPDK_CONFIG_H 00:12:16.536 #define SPDK_CONFIG_AIO_FSDEV 1 00:12:16.536 #define SPDK_CONFIG_APPS 1 00:12:16.536 #define SPDK_CONFIG_ARCH native 00:12:16.536 #define SPDK_CONFIG_ASAN 1 00:12:16.536 #undef SPDK_CONFIG_AVAHI 00:12:16.536 #undef SPDK_CONFIG_CET 00:12:16.536 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:12:16.536 #define SPDK_CONFIG_COVERAGE 1 00:12:16.536 #define SPDK_CONFIG_CROSS_PREFIX 00:12:16.536 #undef SPDK_CONFIG_CRYPTO 00:12:16.536 #undef SPDK_CONFIG_CRYPTO_MLX5 00:12:16.536 #undef SPDK_CONFIG_CUSTOMOCF 00:12:16.536 #undef SPDK_CONFIG_DAOS 00:12:16.536 #define SPDK_CONFIG_DAOS_DIR 00:12:16.536 #define SPDK_CONFIG_DEBUG 1 00:12:16.536 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:12:16.536 #define SPDK_CONFIG_DPDK_DIR 
/home/vagrant/spdk_repo/spdk/dpdk/build 00:12:16.536 #define SPDK_CONFIG_DPDK_INC_DIR 00:12:16.536 #define SPDK_CONFIG_DPDK_LIB_DIR 00:12:16.536 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:12:16.536 #undef SPDK_CONFIG_DPDK_UADK 00:12:16.536 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:12:16.536 #define SPDK_CONFIG_EXAMPLES 1 00:12:16.536 #undef SPDK_CONFIG_FC 00:12:16.536 #define SPDK_CONFIG_FC_PATH 00:12:16.536 #define SPDK_CONFIG_FIO_PLUGIN 1 00:12:16.536 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:12:16.536 #define SPDK_CONFIG_FSDEV 1 00:12:16.536 #undef SPDK_CONFIG_FUSE 00:12:16.536 #undef SPDK_CONFIG_FUZZER 00:12:16.536 #define SPDK_CONFIG_FUZZER_LIB 00:12:16.536 #undef SPDK_CONFIG_GOLANG 00:12:16.536 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:12:16.536 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:12:16.536 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:12:16.536 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:12:16.536 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:12:16.536 #undef SPDK_CONFIG_HAVE_LIBBSD 00:12:16.536 #undef SPDK_CONFIG_HAVE_LZ4 00:12:16.536 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:12:16.536 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:12:16.536 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:12:16.536 #define SPDK_CONFIG_IDXD 1 00:12:16.536 #define SPDK_CONFIG_IDXD_KERNEL 1 00:12:16.536 #undef SPDK_CONFIG_IPSEC_MB 00:12:16.536 #define SPDK_CONFIG_IPSEC_MB_DIR 00:12:16.536 #define SPDK_CONFIG_ISAL 1 00:12:16.536 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:12:16.536 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:12:16.536 #define SPDK_CONFIG_LIBDIR 00:12:16.536 #undef SPDK_CONFIG_LTO 00:12:16.536 #define SPDK_CONFIG_MAX_LCORES 128 00:12:16.536 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:12:16.536 #define SPDK_CONFIG_NVME_CUSE 1 00:12:16.536 #undef SPDK_CONFIG_OCF 00:12:16.536 #define SPDK_CONFIG_OCF_PATH 00:12:16.536 #define SPDK_CONFIG_OPENSSL_PATH 00:12:16.536 #undef SPDK_CONFIG_PGO_CAPTURE 00:12:16.536 #define SPDK_CONFIG_PGO_DIR 00:12:16.536 #undef SPDK_CONFIG_PGO_USE 00:12:16.536 #define SPDK_CONFIG_PREFIX /usr/local 00:12:16.536 #undef SPDK_CONFIG_RAID5F 00:12:16.536 #undef SPDK_CONFIG_RBD 00:12:16.536 #define SPDK_CONFIG_RDMA 1 00:12:16.536 #define SPDK_CONFIG_RDMA_PROV verbs 00:12:16.536 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:12:16.536 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:12:16.536 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:12:16.536 #define SPDK_CONFIG_SHARED 1 00:12:16.536 #undef SPDK_CONFIG_SMA 00:12:16.536 #define SPDK_CONFIG_TESTS 1 00:12:16.536 #undef SPDK_CONFIG_TSAN 00:12:16.536 #define SPDK_CONFIG_UBLK 1 00:12:16.536 #define SPDK_CONFIG_UBSAN 1 00:12:16.536 #undef SPDK_CONFIG_UNIT_TESTS 00:12:16.536 #undef SPDK_CONFIG_URING 00:12:16.536 #define SPDK_CONFIG_URING_PATH 00:12:16.536 #undef SPDK_CONFIG_URING_ZNS 00:12:16.536 #undef SPDK_CONFIG_USDT 00:12:16.536 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:12:16.536 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:12:16.536 #undef SPDK_CONFIG_VFIO_USER 00:12:16.536 #define SPDK_CONFIG_VFIO_USER_DIR 00:12:16.536 #define SPDK_CONFIG_VHOST 1 00:12:16.536 #define SPDK_CONFIG_VIRTIO 1 00:12:16.536 #undef SPDK_CONFIG_VTUNE 00:12:16.536 #define SPDK_CONFIG_VTUNE_DIR 00:12:16.536 #define SPDK_CONFIG_WERROR 1 00:12:16.536 #define SPDK_CONFIG_WPDK_DIR 00:12:16.536 #define SPDK_CONFIG_XNVME 1 00:12:16.536 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:12:16.536 19:14:00 nvme_xnvme -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:12:16.536 19:14:00 nvme_xnvme -- 
common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:16.536 19:14:00 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:12:16.536 19:14:00 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:16.536 19:14:00 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:16.536 19:14:00 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:16.536 19:14:00 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:16.536 19:14:00 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:16.536 19:14:00 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:16.536 19:14:00 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:12:16.536 19:14:00 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:16.536 19:14:00 nvme_xnvme -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:12:16.536 19:14:00 nvme_xnvme -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:12:16.536 19:14:00 nvme_xnvme -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:12:16.536 19:14:00 nvme_xnvme -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:12:16.536 19:14:00 nvme_xnvme -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:12:16.536 19:14:00 nvme_xnvme -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:12:16.536 19:14:00 nvme_xnvme -- pm/common@64 -- # TEST_TAG=N/A 00:12:16.536 19:14:00 nvme_xnvme -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:12:16.536 19:14:00 nvme_xnvme -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:12:16.536 19:14:00 nvme_xnvme -- pm/common@68 -- # uname -s 00:12:16.536 19:14:00 nvme_xnvme -- pm/common@68 -- # PM_OS=Linux 00:12:16.536 19:14:00 nvme_xnvme -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:12:16.536 
19:14:00 nvme_xnvme -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:12:16.536 19:14:00 nvme_xnvme -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:12:16.536 19:14:00 nvme_xnvme -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:12:16.536 19:14:00 nvme_xnvme -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:12:16.536 19:14:00 nvme_xnvme -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:12:16.536 19:14:00 nvme_xnvme -- pm/common@76 -- # SUDO[0]= 00:12:16.536 19:14:00 nvme_xnvme -- pm/common@76 -- # SUDO[1]='sudo -E' 00:12:16.536 19:14:00 nvme_xnvme -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:12:16.536 19:14:00 nvme_xnvme -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:12:16.536 19:14:00 nvme_xnvme -- pm/common@81 -- # [[ Linux == Linux ]] 00:12:16.536 19:14:00 nvme_xnvme -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:12:16.536 19:14:00 nvme_xnvme -- pm/common@88 -- # [[ ! -d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:12:16.536 19:14:00 nvme_xnvme -- common/autotest_common.sh@58 -- # : 1 00:12:16.536 19:14:00 nvme_xnvme -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:12:16.536 19:14:00 nvme_xnvme -- common/autotest_common.sh@62 -- # : 0 00:12:16.536 19:14:00 nvme_xnvme -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:12:16.537 19:14:00 nvme_xnvme -- common/autotest_common.sh@64 -- # : 0 00:12:16.537 19:14:00 nvme_xnvme -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:12:16.537 19:14:00 nvme_xnvme -- common/autotest_common.sh@66 -- # : 1 00:12:16.537 19:14:00 nvme_xnvme -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:12:16.537 19:14:00 nvme_xnvme -- common/autotest_common.sh@68 -- # : 0 00:12:16.537 19:14:00 nvme_xnvme -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:12:16.537 19:14:00 nvme_xnvme -- common/autotest_common.sh@70 -- # : 00:12:16.537 19:14:00 nvme_xnvme -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:12:16.537 19:14:00 nvme_xnvme -- common/autotest_common.sh@72 -- # : 0 00:12:16.537 19:14:00 nvme_xnvme -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:12:16.537 19:14:00 nvme_xnvme -- common/autotest_common.sh@74 -- # : 1 00:12:16.537 19:14:00 nvme_xnvme -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:12:16.537 19:14:00 nvme_xnvme -- common/autotest_common.sh@76 -- # : 0 00:12:16.537 19:14:00 nvme_xnvme -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:12:16.537 19:14:00 nvme_xnvme -- common/autotest_common.sh@78 -- # : 0 00:12:16.537 19:14:00 nvme_xnvme -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:12:16.537 19:14:00 nvme_xnvme -- common/autotest_common.sh@80 -- # : 1 00:12:16.537 19:14:00 nvme_xnvme -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:12:16.537 19:14:00 nvme_xnvme -- common/autotest_common.sh@82 -- # : 0 00:12:16.537 19:14:00 nvme_xnvme -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:12:16.537 19:14:00 nvme_xnvme -- common/autotest_common.sh@84 -- # : 0 00:12:16.537 19:14:00 nvme_xnvme -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:12:16.537 19:14:00 nvme_xnvme -- common/autotest_common.sh@86 -- # : 0 00:12:16.537 19:14:00 nvme_xnvme -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:12:16.537 19:14:00 nvme_xnvme -- common/autotest_common.sh@88 -- # : 0 00:12:16.537 19:14:00 nvme_xnvme -- 
common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:12:16.537 19:14:00 nvme_xnvme -- common/autotest_common.sh@90 -- # : 1 00:12:16.537 19:14:00 nvme_xnvme -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:12:16.537 19:14:00 nvme_xnvme -- common/autotest_common.sh@92 -- # : 0 00:12:16.537 19:14:00 nvme_xnvme -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:12:16.537 19:14:00 nvme_xnvme -- common/autotest_common.sh@94 -- # : 0 00:12:16.537 19:14:00 nvme_xnvme -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:12:16.537 19:14:00 nvme_xnvme -- common/autotest_common.sh@96 -- # : 0 00:12:16.537 19:14:00 nvme_xnvme -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:12:16.537 19:14:00 nvme_xnvme -- common/autotest_common.sh@98 -- # : 0 00:12:16.537 19:14:00 nvme_xnvme -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:12:16.537 19:14:00 nvme_xnvme -- common/autotest_common.sh@100 -- # : 0 00:12:16.537 19:14:00 nvme_xnvme -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:12:16.537 19:14:00 nvme_xnvme -- common/autotest_common.sh@102 -- # : rdma 00:12:16.537 19:14:00 nvme_xnvme -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:12:16.537 19:14:00 nvme_xnvme -- common/autotest_common.sh@104 -- # : 0 00:12:16.537 19:14:00 nvme_xnvme -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:12:16.537 19:14:00 nvme_xnvme -- common/autotest_common.sh@106 -- # : 0 00:12:16.537 19:14:00 nvme_xnvme -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:12:16.537 19:14:00 nvme_xnvme -- common/autotest_common.sh@108 -- # : 0 00:12:16.537 19:14:00 nvme_xnvme -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:12:16.537 19:14:00 nvme_xnvme -- common/autotest_common.sh@110 -- # : 0 00:12:16.537 19:14:00 nvme_xnvme -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:12:16.537 19:14:00 nvme_xnvme -- common/autotest_common.sh@112 -- # : 0 00:12:16.537 19:14:00 nvme_xnvme -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:12:16.537 19:14:00 nvme_xnvme -- common/autotest_common.sh@114 -- # : 0 00:12:16.537 19:14:00 nvme_xnvme -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:12:16.537 19:14:00 nvme_xnvme -- common/autotest_common.sh@116 -- # : 0 00:12:16.537 19:14:00 nvme_xnvme -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:12:16.537 19:14:00 nvme_xnvme -- common/autotest_common.sh@118 -- # : 0 00:12:16.537 19:14:00 nvme_xnvme -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:12:16.537 19:14:00 nvme_xnvme -- common/autotest_common.sh@120 -- # : 0 00:12:16.537 19:14:00 nvme_xnvme -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:12:16.537 19:14:00 nvme_xnvme -- common/autotest_common.sh@122 -- # : 1 00:12:16.537 19:14:00 nvme_xnvme -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:12:16.537 19:14:00 nvme_xnvme -- common/autotest_common.sh@124 -- # : 1 00:12:16.537 19:14:00 nvme_xnvme -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:12:16.537 19:14:00 nvme_xnvme -- common/autotest_common.sh@126 -- # : 00:12:16.537 19:14:00 nvme_xnvme -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:12:16.537 19:14:00 nvme_xnvme -- common/autotest_common.sh@128 -- # : 0 00:12:16.537 19:14:00 nvme_xnvme -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:12:16.537 19:14:00 nvme_xnvme -- common/autotest_common.sh@130 -- # : 
0 00:12:16.537 19:14:00 nvme_xnvme -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:12:16.537 19:14:00 nvme_xnvme -- common/autotest_common.sh@132 -- # : 1 00:12:16.537 19:14:00 nvme_xnvme -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:12:16.537 19:14:00 nvme_xnvme -- common/autotest_common.sh@134 -- # : 0 00:12:16.537 19:14:00 nvme_xnvme -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:12:16.537 19:14:00 nvme_xnvme -- common/autotest_common.sh@136 -- # : 0 00:12:16.537 19:14:00 nvme_xnvme -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:12:16.537 19:14:00 nvme_xnvme -- common/autotest_common.sh@138 -- # : 0 00:12:16.537 19:14:00 nvme_xnvme -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:12:16.537 19:14:00 nvme_xnvme -- common/autotest_common.sh@140 -- # : 00:12:16.537 19:14:00 nvme_xnvme -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:12:16.537 19:14:00 nvme_xnvme -- common/autotest_common.sh@142 -- # : true 00:12:16.537 19:14:00 nvme_xnvme -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:12:16.537 19:14:00 nvme_xnvme -- common/autotest_common.sh@144 -- # : 0 00:12:16.537 19:14:00 nvme_xnvme -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:12:16.537 19:14:00 nvme_xnvme -- common/autotest_common.sh@146 -- # : 0 00:12:16.537 19:14:00 nvme_xnvme -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:12:16.537 19:14:00 nvme_xnvme -- common/autotest_common.sh@148 -- # : 0 00:12:16.537 19:14:00 nvme_xnvme -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:12:16.537 19:14:00 nvme_xnvme -- common/autotest_common.sh@150 -- # : 0 00:12:16.537 19:14:00 nvme_xnvme -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:12:16.537 19:14:00 nvme_xnvme -- common/autotest_common.sh@152 -- # : 0 00:12:16.537 19:14:00 nvme_xnvme -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:12:16.537 19:14:00 nvme_xnvme -- common/autotest_common.sh@154 -- # : 00:12:16.537 19:14:00 nvme_xnvme -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:12:16.537 19:14:00 nvme_xnvme -- common/autotest_common.sh@156 -- # : 0 00:12:16.537 19:14:00 nvme_xnvme -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:12:16.537 19:14:00 nvme_xnvme -- common/autotest_common.sh@158 -- # : 0 00:12:16.537 19:14:00 nvme_xnvme -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:12:16.537 19:14:00 nvme_xnvme -- common/autotest_common.sh@160 -- # : 1 00:12:16.537 19:14:00 nvme_xnvme -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:12:16.537 19:14:00 nvme_xnvme -- common/autotest_common.sh@162 -- # : 0 00:12:16.537 19:14:00 nvme_xnvme -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:12:16.537 19:14:00 nvme_xnvme -- common/autotest_common.sh@164 -- # : 0 00:12:16.537 19:14:00 nvme_xnvme -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:12:16.537 19:14:00 nvme_xnvme -- common/autotest_common.sh@166 -- # : 0 00:12:16.537 19:14:00 nvme_xnvme -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:12:16.537 19:14:00 nvme_xnvme -- common/autotest_common.sh@169 -- # : 00:12:16.537 19:14:00 nvme_xnvme -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:12:16.537 19:14:00 nvme_xnvme -- common/autotest_common.sh@171 -- # : 0 00:12:16.537 19:14:00 nvme_xnvme -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:12:16.537 19:14:00 nvme_xnvme -- 
common/autotest_common.sh@173 -- # : 0 00:12:16.537 19:14:00 nvme_xnvme -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:12:16.537 19:14:00 nvme_xnvme -- common/autotest_common.sh@175 -- # : 0 00:12:16.537 19:14:00 nvme_xnvme -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:12:16.537 19:14:00 nvme_xnvme -- common/autotest_common.sh@177 -- # : 0 00:12:16.537 19:14:00 nvme_xnvme -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:12:16.537 19:14:00 nvme_xnvme -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:12:16.537 19:14:00 nvme_xnvme -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:12:16.537 19:14:00 nvme_xnvme -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:12:16.537 19:14:00 nvme_xnvme -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:12:16.537 19:14:00 nvme_xnvme -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:12:16.537 19:14:00 nvme_xnvme -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:12:16.537 19:14:00 nvme_xnvme -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:12:16.538 19:14:00 nvme_xnvme -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:12:16.538 19:14:00 nvme_xnvme -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:12:16.538 19:14:00 nvme_xnvme -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:12:16.538 19:14:00 nvme_xnvme -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:12:16.538 19:14:00 nvme_xnvme -- common/autotest_common.sh@191 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:12:16.538 19:14:00 nvme_xnvme -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:12:16.538 19:14:00 nvme_xnvme -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:12:16.538 19:14:00 nvme_xnvme -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:16.538 19:14:00 nvme_xnvme -- 
common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:16.538 19:14:00 nvme_xnvme -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:12:16.538 19:14:00 nvme_xnvme -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:12:16.538 19:14:00 nvme_xnvme -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:12:16.538 19:14:00 nvme_xnvme -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:12:16.538 19:14:00 nvme_xnvme -- common/autotest_common.sh@206 -- # cat 00:12:16.538 19:14:00 nvme_xnvme -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:12:16.538 19:14:00 nvme_xnvme -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:16.538 19:14:00 nvme_xnvme -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:16.538 19:14:00 nvme_xnvme -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:16.538 19:14:00 nvme_xnvme -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:16.538 19:14:00 nvme_xnvme -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:12:16.538 19:14:00 nvme_xnvme -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:12:16.538 19:14:00 nvme_xnvme -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:12:16.538 19:14:00 nvme_xnvme -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:12:16.538 19:14:00 nvme_xnvme -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:12:16.538 19:14:00 nvme_xnvme -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:12:16.538 19:14:00 nvme_xnvme -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:16.538 19:14:00 nvme_xnvme -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:16.538 19:14:00 nvme_xnvme -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:12:16.538 19:14:00 nvme_xnvme -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:12:16.538 19:14:00 nvme_xnvme -- common/autotest_common.sh@262 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:12:16.538 19:14:00 nvme_xnvme -- common/autotest_common.sh@262 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:12:16.538 19:14:00 nvme_xnvme -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:16.538 19:14:00 nvme_xnvme -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:16.538 19:14:00 nvme_xnvme -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:12:16.538 19:14:00 nvme_xnvme -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:12:16.538 19:14:00 nvme_xnvme -- common/autotest_common.sh@269 -- # _LCOV= 00:12:16.538 19:14:00 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:12:16.538 19:14:00 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 
00:12:16.538 19:14:00 nvme_xnvme -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:12:16.538 19:14:00 nvme_xnvme -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:12:16.538 19:14:00 nvme_xnvme -- common/autotest_common.sh@275 -- # lcov_opt= 00:12:16.538 19:14:00 nvme_xnvme -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:12:16.538 19:14:00 nvme_xnvme -- common/autotest_common.sh@279 -- # export valgrind= 00:12:16.538 19:14:00 nvme_xnvme -- common/autotest_common.sh@279 -- # valgrind= 00:12:16.538 19:14:00 nvme_xnvme -- common/autotest_common.sh@285 -- # uname -s 00:12:16.538 19:14:00 nvme_xnvme -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:12:16.538 19:14:00 nvme_xnvme -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:12:16.538 19:14:00 nvme_xnvme -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:12:16.538 19:14:00 nvme_xnvme -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:12:16.538 19:14:00 nvme_xnvme -- common/autotest_common.sh@289 -- # MAKE=make 00:12:16.538 19:14:00 nvme_xnvme -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j10 00:12:16.538 19:14:00 nvme_xnvme -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:12:16.538 19:14:00 nvme_xnvme -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:12:16.538 19:14:00 nvme_xnvme -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:12:16.538 19:14:00 nvme_xnvme -- common/autotest_common.sh@309 -- # TEST_MODE= 00:12:16.538 19:14:00 nvme_xnvme -- common/autotest_common.sh@331 -- # [[ -z 70394 ]] 00:12:16.538 19:14:00 nvme_xnvme -- common/autotest_common.sh@331 -- # kill -0 70394 00:12:16.538 19:14:00 nvme_xnvme -- common/autotest_common.sh@1696 -- # set_test_storage 2147483648 00:12:16.538 19:14:00 nvme_xnvme -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:12:16.538 19:14:00 nvme_xnvme -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:12:16.538 19:14:00 nvme_xnvme -- common/autotest_common.sh@344 -- # local mount target_dir 00:12:16.538 19:14:00 nvme_xnvme -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:12:16.538 19:14:00 nvme_xnvme -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:12:16.538 19:14:00 nvme_xnvme -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:12:16.538 19:14:00 nvme_xnvme -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:12:16.538 19:14:00 nvme_xnvme -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.qR6ebJ 00:12:16.538 19:14:00 nvme_xnvme -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:12:16.538 19:14:00 nvme_xnvme -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:12:16.538 19:14:00 nvme_xnvme -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:12:16.538 19:14:00 nvme_xnvme -- common/autotest_common.sh@368 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvme/xnvme /tmp/spdk.qR6ebJ/tests/xnvme /tmp/spdk.qR6ebJ 00:12:16.538 19:14:00 nvme_xnvme -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:12:16.538 19:14:00 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:16.538 19:14:00 nvme_xnvme -- common/autotest_common.sh@340 -- # df -T 00:12:16.538 19:14:00 nvme_xnvme -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:12:16.538 19:14:00 nvme_xnvme -- 
common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:12:16.538 19:14:00 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:12:16.538 19:14:00 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13971681280 00:12:16.538 19:14:00 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:12:16.538 19:14:00 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5596467200 00:12:16.538 19:14:00 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:16.538 19:14:00 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=devtmpfs 00:12:16.538 19:14:00 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:12:16.538 19:14:00 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=4194304 00:12:16.538 19:14:00 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=4194304 00:12:16.538 19:14:00 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:12:16.538 19:14:00 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:16.538 19:14:00 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:16.538 19:14:00 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:16.538 19:14:00 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6260629504 00:12:16.538 19:14:00 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6265393152 00:12:16.538 19:14:00 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=4763648 00:12:16.538 19:14:00 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:16.538 19:14:00 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:16.538 19:14:00 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:16.538 19:14:00 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=2493362176 00:12:16.538 19:14:00 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=2506158080 00:12:16.538 19:14:00 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12795904 00:12:16.538 19:14:00 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:16.538 19:14:00 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:12:16.538 19:14:00 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:12:16.538 19:14:00 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13971681280 00:12:16.538 19:14:00 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:12:16.538 19:14:00 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5596467200 00:12:16.538 19:14:00 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:16.538 19:14:00 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:16.538 19:14:00 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:16.538 19:14:00 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6265249792 00:12:16.538 19:14:00 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6265397248 00:12:16.538 19:14:00 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=147456 00:12:16.538 19:14:00 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:16.538 19:14:00 nvme_xnvme -- 
common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda2 00:12:16.538 19:14:00 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=ext4 00:12:16.538 19:14:00 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=840085504 00:12:16.538 19:14:00 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1012768768 00:12:16.538 19:14:00 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=103477248 00:12:16.538 19:14:00 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:16.538 19:14:00 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda3 00:12:16.538 19:14:00 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=vfat 00:12:16.539 19:14:00 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=91617280 00:12:16.539 19:14:00 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=104607744 00:12:16.539 19:14:00 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12990464 00:12:16.539 19:14:00 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:16.539 19:14:00 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:16.539 19:14:00 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:16.539 19:14:00 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=1253064704 00:12:16.539 19:14:00 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1253076992 00:12:16.539 19:14:00 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:12:16.539 19:14:00 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:16.539 19:14:00 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output 00:12:16.539 19:14:00 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=fuse.sshfs 00:12:16.539 19:14:00 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=98871046144 00:12:16.539 19:14:00 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=105088212992 00:12:16.539 19:14:00 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=831733760 00:12:16.539 19:14:00 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:16.539 19:14:00 nvme_xnvme -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:12:16.539 * Looking for test storage... 
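The df -T block above is autotest_common.sh's set_test_storage at work: it loads every mount's device, filesystem type, size, and free space into the mounts/fss/sizes/avails/uses arrays, then (immediately below) resolves the mount backing the test directory and keeps it if it has at least the requested 2214592512 bytes free. A condensed reconstruction of the parse step, with array and column names taken from the trace:

    # Condensed sketch of set_test_storage's mount scan
    # (autotest_common.sh@340, @373-@376); names are as traced, but the
    # exact df invocation beyond 'df -T' is not visible in the log.
    declare -A mounts fss sizes avails uses
    while read -r source fs size use avail _ mount; do
        mounts["$mount"]=$source
        fss["$mount"]=$fs
        sizes["$mount"]=$size
        avails["$mount"]=$avail
        uses["$mount"]=$use
    done < <(df -T | grep -v Filesystem)

In the lines that follow, /home resolves to the btrfs root (/dev/vda5) with target_space=13971681280 bytes available, comfortably above requested_size, so the xnvme tests run in the repo's own test directory rather than falling back to /tmp/spdk.qR6ebJ.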
00:12:16.539 19:14:00 nvme_xnvme -- common/autotest_common.sh@381 -- # local target_space new_size 00:12:16.539 19:14:00 nvme_xnvme -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:12:16.539 19:14:00 nvme_xnvme -- common/autotest_common.sh@385 -- # df /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:16.539 19:14:00 nvme_xnvme -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:12:16.539 19:14:00 nvme_xnvme -- common/autotest_common.sh@385 -- # mount=/home 00:12:16.539 19:14:00 nvme_xnvme -- common/autotest_common.sh@387 -- # target_space=13971681280 00:12:16.539 19:14:00 nvme_xnvme -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:12:16.539 19:14:00 nvme_xnvme -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:12:16.539 19:14:00 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == tmpfs ]] 00:12:16.539 19:14:00 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == ramfs ]] 00:12:16.539 19:14:00 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ /home == / ]] 00:12:16.539 19:14:00 nvme_xnvme -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:16.539 19:14:00 nvme_xnvme -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:16.539 19:14:00 nvme_xnvme -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:16.539 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:16.539 19:14:00 nvme_xnvme -- common/autotest_common.sh@402 -- # return 0 00:12:16.539 19:14:00 nvme_xnvme -- common/autotest_common.sh@1698 -- # set -o errtrace 00:12:16.539 19:14:00 nvme_xnvme -- common/autotest_common.sh@1699 -- # shopt -s extdebug 00:12:16.539 19:14:00 nvme_xnvme -- common/autotest_common.sh@1700 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:12:16.539 19:14:00 nvme_xnvme -- common/autotest_common.sh@1702 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:12:16.539 19:14:00 nvme_xnvme -- common/autotest_common.sh@1703 -- # true 00:12:16.539 19:14:00 nvme_xnvme -- common/autotest_common.sh@1705 -- # xtrace_fd 00:12:16.539 19:14:00 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:12:16.539 19:14:00 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:12:16.539 19:14:00 nvme_xnvme -- common/autotest_common.sh@27 -- # exec 00:12:16.539 19:14:00 nvme_xnvme -- common/autotest_common.sh@29 -- # exec 00:12:16.539 19:14:00 nvme_xnvme -- common/autotest_common.sh@31 -- # xtrace_restore 00:12:16.539 19:14:00 nvme_xnvme -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:12:16.539 19:14:00 nvme_xnvme -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:12:16.539 19:14:00 nvme_xnvme -- common/autotest_common.sh@18 -- # set -x 00:12:16.539 19:14:00 nvme_xnvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:16.539 19:14:00 nvme_xnvme -- common/autotest_common.sh@1711 -- # lcov --version 00:12:16.539 19:14:00 nvme_xnvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:16.539 19:14:00 nvme_xnvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:16.539 19:14:00 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:16.539 19:14:00 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:16.539 19:14:00 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:16.539 19:14:00 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:12:16.539 19:14:00 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:12:16.539 19:14:00 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:12:16.539 19:14:00 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:12:16.539 19:14:00 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:12:16.539 19:14:00 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:12:16.539 19:14:00 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:12:16.539 19:14:00 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:16.539 19:14:00 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:12:16.539 19:14:00 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:12:16.539 19:14:00 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:16.539 19:14:00 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:16.539 19:14:00 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:12:16.539 19:14:00 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:12:16.539 19:14:00 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:16.539 19:14:00 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:12:16.539 19:14:00 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:12:16.539 19:14:00 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:12:16.539 19:14:00 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:12:16.539 19:14:00 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:16.539 19:14:00 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:12:16.539 19:14:00 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:12:16.539 19:14:00 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:16.539 19:14:00 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:16.539 19:14:00 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:12:16.539 19:14:00 nvme_xnvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:16.539 19:14:00 nvme_xnvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:16.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:16.539 --rc genhtml_branch_coverage=1 00:12:16.539 --rc genhtml_function_coverage=1 00:12:16.539 --rc genhtml_legend=1 00:12:16.539 --rc geninfo_all_blocks=1 00:12:16.539 --rc geninfo_unexecuted_blocks=1 00:12:16.539 00:12:16.539 ' 00:12:16.539 19:14:00 nvme_xnvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:16.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:16.539 --rc genhtml_branch_coverage=1 00:12:16.539 --rc genhtml_function_coverage=1 00:12:16.539 --rc genhtml_legend=1 00:12:16.539 --rc geninfo_all_blocks=1 
00:12:16.539 --rc geninfo_unexecuted_blocks=1 00:12:16.539 00:12:16.539 ' 00:12:16.539 19:14:00 nvme_xnvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:16.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:16.539 --rc genhtml_branch_coverage=1 00:12:16.539 --rc genhtml_function_coverage=1 00:12:16.539 --rc genhtml_legend=1 00:12:16.539 --rc geninfo_all_blocks=1 00:12:16.539 --rc geninfo_unexecuted_blocks=1 00:12:16.539 00:12:16.539 ' 00:12:16.539 19:14:00 nvme_xnvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:16.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:16.539 --rc genhtml_branch_coverage=1 00:12:16.539 --rc genhtml_function_coverage=1 00:12:16.539 --rc genhtml_legend=1 00:12:16.539 --rc geninfo_all_blocks=1 00:12:16.539 --rc geninfo_unexecuted_blocks=1 00:12:16.539 00:12:16.539 ' 00:12:16.539 19:14:00 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:16.539 19:14:00 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:12:16.539 19:14:00 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:16.539 19:14:00 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:16.539 19:14:00 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:16.539 19:14:00 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:16.539 19:14:00 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:16.539 19:14:00 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:16.539 19:14:00 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:12:16.539 19:14:00 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:16.539 19:14:00 nvme_xnvme -- 
xnvme/common.sh@12 -- # xnvme_io=('libaio' 'io_uring' 'io_uring_cmd') 00:12:16.539 19:14:00 nvme_xnvme -- xnvme/common.sh@12 -- # declare -a xnvme_io 00:12:16.539 19:14:00 nvme_xnvme -- xnvme/common.sh@18 -- # libaio=('randread' 'randwrite') 00:12:16.539 19:14:00 nvme_xnvme -- xnvme/common.sh@18 -- # declare -a libaio 00:12:16.540 19:14:00 nvme_xnvme -- xnvme/common.sh@23 -- # io_uring=('randread' 'randwrite') 00:12:16.540 19:14:00 nvme_xnvme -- xnvme/common.sh@23 -- # declare -a io_uring 00:12:16.540 19:14:00 nvme_xnvme -- xnvme/common.sh@27 -- # io_uring_cmd=('randread' 'randwrite' 'unmap' 'write_zeroes') 00:12:16.540 19:14:00 nvme_xnvme -- xnvme/common.sh@27 -- # declare -a io_uring_cmd 00:12:16.540 19:14:00 nvme_xnvme -- xnvme/common.sh@33 -- # libaio_fio=('randread' 'randwrite') 00:12:16.540 19:14:00 nvme_xnvme -- xnvme/common.sh@33 -- # declare -a libaio_fio 00:12:16.540 19:14:00 nvme_xnvme -- xnvme/common.sh@37 -- # io_uring_fio=('randread' 'randwrite') 00:12:16.540 19:14:00 nvme_xnvme -- xnvme/common.sh@37 -- # declare -a io_uring_fio 00:12:16.540 19:14:00 nvme_xnvme -- xnvme/common.sh@41 -- # io_uring_cmd_fio=('randread' 'randwrite') 00:12:16.540 19:14:00 nvme_xnvme -- xnvme/common.sh@41 -- # declare -a io_uring_cmd_fio 00:12:16.540 19:14:00 nvme_xnvme -- xnvme/common.sh@45 -- # xnvme_filename=(['libaio']='/dev/nvme0n1' ['io_uring']='/dev/nvme0n1' ['io_uring_cmd']='/dev/ng0n1') 00:12:16.540 19:14:00 nvme_xnvme -- xnvme/common.sh@45 -- # declare -A xnvme_filename 00:12:16.540 19:14:00 nvme_xnvme -- xnvme/common.sh@51 -- # xnvme_conserve_cpu=('false' 'true') 00:12:16.540 19:14:00 nvme_xnvme -- xnvme/common.sh@51 -- # declare -a xnvme_conserve_cpu 00:12:16.540 19:14:00 nvme_xnvme -- xnvme/common.sh@57 -- # method_bdev_xnvme_create_0=(['name']='xnvme_bdev' ['filename']='/dev/nvme0n1' ['io_mechanism']='libaio' ['conserve_cpu']='false') 00:12:16.540 19:14:00 nvme_xnvme -- xnvme/common.sh@57 -- # declare -A method_bdev_xnvme_create_0 00:12:16.540 19:14:00 nvme_xnvme -- xnvme/common.sh@89 -- # prep_nvme 00:12:16.540 19:14:00 nvme_xnvme -- xnvme/common.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:16.801 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:17.062 Waiting for block devices as requested 00:12:17.062 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:12:17.326 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:12:17.326 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:12:17.326 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:12:22.617 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:12:22.617 19:14:06 nvme_xnvme -- xnvme/common.sh@73 -- # modprobe -r nvme 00:12:22.877 19:14:07 nvme_xnvme -- xnvme/common.sh@74 -- # nproc 00:12:22.877 19:14:07 nvme_xnvme -- xnvme/common.sh@74 -- # modprobe nvme poll_queues=10 00:12:23.138 19:14:07 nvme_xnvme -- xnvme/common.sh@77 -- # local nvme 00:12:23.138 19:14:07 nvme_xnvme -- xnvme/common.sh@78 -- # for nvme in /dev/nvme*n!(*p*) 00:12:23.139 19:14:07 nvme_xnvme -- xnvme/common.sh@79 -- # block_in_use /dev/nvme0n1 00:12:23.139 19:14:07 nvme_xnvme -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:12:23.139 19:14:07 nvme_xnvme -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:12:23.139 No valid GPT data, bailing 00:12:23.139 19:14:07 nvme_xnvme -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:12:23.139 19:14:07 nvme_xnvme -- 
scripts/common.sh@394 -- # pt= 00:12:23.139 19:14:07 nvme_xnvme -- scripts/common.sh@395 -- # return 1 00:12:23.139 19:14:07 nvme_xnvme -- xnvme/common.sh@80 -- # xnvme_filename["libaio"]=/dev/nvme0n1 00:12:23.139 19:14:07 nvme_xnvme -- xnvme/common.sh@81 -- # xnvme_filename["io_uring"]=/dev/nvme0n1 00:12:23.139 19:14:07 nvme_xnvme -- xnvme/common.sh@82 -- # xnvme_filename["io_uring_cmd"]=/dev/ng0n1 00:12:23.139 19:14:07 nvme_xnvme -- xnvme/common.sh@83 -- # return 0 00:12:23.139 19:14:07 nvme_xnvme -- xnvme/xnvme.sh@73 -- # trap 'killprocess "$spdk_tgt"' EXIT 00:12:23.139 19:14:07 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:12:23.139 19:14:07 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:12:23.139 19:14:07 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:12:23.139 19:14:07 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:12:23.139 19:14:07 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:12:23.139 19:14:07 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:12:23.139 19:14:07 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:12:23.139 19:14:07 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:12:23.139 19:14:07 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:12:23.139 19:14:07 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:23.139 19:14:07 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:23.139 19:14:07 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:23.139 ************************************ 00:12:23.139 START TEST xnvme_rpc 00:12:23.139 ************************************ 00:12:23.139 19:14:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:12:23.139 19:14:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:12:23.139 19:14:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:12:23.139 19:14:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:12:23.139 19:14:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:12:23.139 19:14:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=70790 00:12:23.139 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:23.139 19:14:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 70790 00:12:23.139 19:14:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:12:23.139 19:14:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 70790 ']' 00:12:23.139 19:14:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:23.139 19:14:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:23.139 19:14:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:23.139 19:14:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:23.139 19:14:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:23.401 [2024-12-16 19:14:07.508147] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
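[Annotation] The nested loops visible in the xnvme.sh trace (for io in xnvme_io / for cc in xnvme_conserve_cpu) enumerate a small test matrix: each io_mechanism paired with each conserve_cpu value, with the same rpc/bdevperf/fio sub-tests run for every pair. A runnable sketch of just that enumeration, with a printf standing in for run_test since that helper lives elsewhere in the harness:

#!/usr/bin/env bash
# Sketch of the xnvme test matrix reconstructed from the trace above;
# run_test and the sub-tests themselves are harness code, so a printf
# stands in for them here.
xnvme_io=('libaio' 'io_uring' 'io_uring_cmd')
declare -A xnvme_filename=(
  ['libaio']='/dev/nvme0n1'
  ['io_uring']='/dev/nvme0n1'
  ['io_uring_cmd']='/dev/ng0n1'    # io_uring_cmd drives the char device
)
xnvme_conserve_cpu=('false' 'true')

for io in "${xnvme_io[@]}"; do
  for cc in "${xnvme_conserve_cpu[@]}"; do
    printf 'xnvme_rpc/xnvme_bdevperf/xnvme_fio_plugin: io=%s cc=%s filename=%s\n' \
      "$io" "$cc" "${xnvme_filename[$io]}"
  done
done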
00:12:23.401 [2024-12-16 19:14:07.508303] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70790 ] 00:12:23.401 [2024-12-16 19:14:07.674665] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:23.662 [2024-12-16 19:14:07.823471] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:12:24.607 19:14:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:24.607 19:14:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:12:24.607 19:14:08 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio '' 00:12:24.607 19:14:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.607 19:14:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:24.607 xnvme_bdev 00:12:24.607 19:14:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.607 19:14:08 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:12:24.607 19:14:08 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:24.607 19:14:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.607 19:14:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:24.607 19:14:08 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:12:24.607 19:14:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.607 19:14:08 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:12:24.607 19:14:08 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:12:24.607 19:14:08 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:24.607 19:14:08 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:12:24.607 19:14:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.607 19:14:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:24.607 19:14:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.607 19:14:08 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:12:24.607 19:14:08 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:12:24.607 19:14:08 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:24.607 19:14:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.607 19:14:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:24.607 19:14:08 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:12:24.607 19:14:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.607 19:14:08 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:12:24.607 19:14:08 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:12:24.607 19:14:08 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:24.607 19:14:08 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.607 19:14:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:24.607 19:14:08 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:12:24.607 19:14:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.607 19:14:08 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:12:24.607 19:14:08 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:12:24.607 19:14:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.607 19:14:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:24.607 19:14:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.607 19:14:08 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 70790 00:12:24.607 19:14:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 70790 ']' 00:12:24.607 19:14:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 70790 00:12:24.607 19:14:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:12:24.607 19:14:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:24.607 19:14:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70790 00:12:24.607 19:14:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:24.607 killing process with pid 70790 00:12:24.607 19:14:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:24.607 19:14:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70790' 00:12:24.607 19:14:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 70790 00:12:24.607 19:14:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 70790 00:12:26.525 00:12:26.525 real 0m3.228s 00:12:26.525 user 0m3.101s 00:12:26.525 sys 0m0.589s 00:12:26.525 ************************************ 00:12:26.525 END TEST xnvme_rpc 00:12:26.525 ************************************ 00:12:26.525 19:14:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:26.525 19:14:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:26.525 19:14:10 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:12:26.525 19:14:10 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:26.525 19:14:10 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:26.525 19:14:10 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:26.525 ************************************ 00:12:26.525 START TEST xnvme_bdevperf 00:12:26.525 ************************************ 00:12:26.525 19:14:10 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:12:26.525 19:14:10 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:12:26.525 19:14:10 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:12:26.525 19:14:10 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:26.525 19:14:10 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:12:26.525 19:14:10 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:12:26.525 19:14:10 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:26.525 19:14:10 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:26.525 { 00:12:26.525 "subsystems": [ 00:12:26.525 { 00:12:26.525 "subsystem": "bdev", 00:12:26.525 "config": [ 00:12:26.525 { 00:12:26.525 "params": { 00:12:26.525 "io_mechanism": "libaio", 00:12:26.525 "conserve_cpu": false, 00:12:26.525 "filename": "/dev/nvme0n1", 00:12:26.525 "name": "xnvme_bdev" 00:12:26.525 }, 00:12:26.525 "method": "bdev_xnvme_create" 00:12:26.525 }, 00:12:26.525 { 00:12:26.525 "method": "bdev_wait_for_examine" 00:12:26.525 } 00:12:26.525 ] 00:12:26.525 } 00:12:26.525 ] 00:12:26.525 } 00:12:26.525 [2024-12-16 19:14:10.805474] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:12:26.525 [2024-12-16 19:14:10.805895] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70864 ] 00:12:26.787 [2024-12-16 19:14:10.970957] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:26.787 [2024-12-16 19:14:11.109095] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:12:27.360 Running I/O for 5 seconds... 00:12:29.247 26290.00 IOPS, 102.70 MiB/s [2024-12-16T19:14:14.545Z] 26634.00 IOPS, 104.04 MiB/s [2024-12-16T19:14:15.488Z] 26302.67 IOPS, 102.74 MiB/s [2024-12-16T19:14:16.876Z] 26343.75 IOPS, 102.91 MiB/s 00:12:32.522 Latency(us) 00:12:32.522 [2024-12-16T19:14:16.876Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:32.522 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:12:32.522 xnvme_bdev : 5.00 26383.59 103.06 0.00 0.00 2420.29 482.07 6654.42 00:12:32.522 [2024-12-16T19:14:16.876Z] =================================================================================================================== 00:12:32.522 [2024-12-16T19:14:16.876Z] Total : 26383.59 103.06 0.00 0.00 2420.29 482.07 6654.42 00:12:33.095 19:14:17 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:33.095 19:14:17 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:12:33.095 19:14:17 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:12:33.095 19:14:17 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:33.095 19:14:17 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:33.095 { 00:12:33.095 "subsystems": [ 00:12:33.095 { 00:12:33.095 "subsystem": "bdev", 00:12:33.095 "config": [ 00:12:33.095 { 00:12:33.095 "params": { 00:12:33.095 "io_mechanism": "libaio", 00:12:33.095 "conserve_cpu": false, 00:12:33.095 "filename": "/dev/nvme0n1", 00:12:33.095 "name": "xnvme_bdev" 00:12:33.095 }, 00:12:33.095 "method": "bdev_xnvme_create" 00:12:33.095 }, 00:12:33.095 { 00:12:33.095 "method": "bdev_wait_for_examine" 00:12:33.095 } 00:12:33.095 ] 00:12:33.095 } 00:12:33.095 ] 00:12:33.095 } 00:12:33.355 [2024-12-16 19:14:17.453525] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
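[Annotation] The bdevperf invocations above can be reproduced standalone. The sketch below spells out the JSON that gen_conf emits on /dev/fd/62 as a here-doc; the flags are the ones from the trace (-q queue depth, -w workload, -t runtime in seconds, -o IO size in bytes, -T naming the bdev to exercise, as paired with xnvme_bdev in the trace):

#!/usr/bin/env bash
# Sketch of the traced bdevperf invocation, with gen_conf's /dev/fd/62
# JSON written out explicitly via process substitution.
BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf

"$BDEVPERF" --json <(cat <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_xnvme_create",
          "params": {
            "io_mechanism": "libaio",
            "conserve_cpu": false,
            "filename": "/dev/nvme0n1",
            "name": "xnvme_bdev"
          }
        },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
EOF
) -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096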
00:12:33.355 [2024-12-16 19:14:17.453660] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70954 ] 00:12:33.355 [2024-12-16 19:14:17.618453] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:33.616 [2024-12-16 19:14:17.758934] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:12:33.878 Running I/O for 5 seconds... 00:12:35.807 34336.00 IOPS, 134.12 MiB/s [2024-12-16T19:14:21.547Z] 32671.00 IOPS, 127.62 MiB/s [2024-12-16T19:14:22.120Z] 32327.33 IOPS, 126.28 MiB/s [2024-12-16T19:14:23.506Z] 32196.25 IOPS, 125.77 MiB/s [2024-12-16T19:14:23.506Z] 31805.40 IOPS, 124.24 MiB/s 00:12:39.152 Latency(us) 00:12:39.152 [2024-12-16T19:14:23.506Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:39.152 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:12:39.152 xnvme_bdev : 5.01 31768.45 124.10 0.00 0.00 2009.22 294.60 8368.44 00:12:39.152 [2024-12-16T19:14:23.506Z] =================================================================================================================== 00:12:39.152 [2024-12-16T19:14:23.506Z] Total : 31768.45 124.10 0.00 0.00 2009.22 294.60 8368.44 00:12:39.721 00:12:39.721 real 0m13.284s 00:12:39.721 user 0m5.161s 00:12:39.721 sys 0m6.662s 00:12:39.721 19:14:24 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:39.721 19:14:24 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:39.721 ************************************ 00:12:39.721 END TEST xnvme_bdevperf 00:12:39.721 ************************************ 00:12:39.721 19:14:24 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:12:39.721 19:14:24 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:39.721 19:14:24 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:39.721 19:14:24 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:39.721 ************************************ 00:12:39.980 START TEST xnvme_fio_plugin 00:12:39.980 ************************************ 00:12:39.980 19:14:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:12:39.980 19:14:24 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:12:39.980 19:14:24 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:12:39.980 19:14:24 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:39.980 19:14:24 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:39.980 19:14:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:39.980 19:14:24 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:12:39.980 19:14:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:12:39.980 
19:14:24 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:12:39.980 19:14:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:12:39.980 19:14:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:39.980 19:14:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:12:39.980 19:14:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:39.980 19:14:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:12:39.980 19:14:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:12:39.980 19:14:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:12:39.980 19:14:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:39.980 19:14:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:12:39.980 19:14:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:12:39.980 19:14:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:39.980 19:14:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:39.980 19:14:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:12:39.980 19:14:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:39.980 19:14:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:39.980 { 00:12:39.980 "subsystems": [ 00:12:39.980 { 00:12:39.980 "subsystem": "bdev", 00:12:39.980 "config": [ 00:12:39.980 { 00:12:39.980 "params": { 00:12:39.980 "io_mechanism": "libaio", 00:12:39.980 "conserve_cpu": false, 00:12:39.980 "filename": "/dev/nvme0n1", 00:12:39.980 "name": "xnvme_bdev" 00:12:39.980 }, 00:12:39.980 "method": "bdev_xnvme_create" 00:12:39.980 }, 00:12:39.980 { 00:12:39.980 "method": "bdev_wait_for_examine" 00:12:39.980 } 00:12:39.980 ] 00:12:39.980 } 00:12:39.980 ] 00:12:39.980 } 00:12:39.980 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:12:39.980 fio-3.35 00:12:39.981 Starting 1 thread 00:12:46.578 00:12:46.578 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71073: Mon Dec 16 19:14:30 2024 00:12:46.578 read: IOPS=29.9k, BW=117MiB/s (123MB/s)(585MiB/5001msec) 00:12:46.578 slat (usec): min=4, max=1977, avg=24.63, stdev=105.21 00:12:46.578 clat (usec): min=108, max=5776, avg=1466.01, stdev=553.03 00:12:46.578 lat (usec): min=203, max=5819, avg=1490.64, stdev=541.15 00:12:46.578 clat percentiles (usec): 00:12:46.578 | 1.00th=[ 289], 5.00th=[ 594], 10.00th=[ 766], 20.00th=[ 1004], 00:12:46.578 | 30.00th=[ 1188], 40.00th=[ 1336], 50.00th=[ 1467], 60.00th=[ 1598], 00:12:46.578 | 70.00th=[ 1729], 80.00th=[ 1876], 90.00th=[ 2114], 95.00th=[ 2376], 00:12:46.578 | 99.00th=[ 3064], 99.50th=[ 3326], 99.90th=[ 4015], 99.95th=[ 4293], 00:12:46.578 | 99.99th=[ 4948] 00:12:46.578 bw ( KiB/s): min=112528, 
max=127584, per=99.96%, avg=119704.00, stdev=4443.86, samples=9 00:12:46.578 iops : min=28132, max=31896, avg=29926.00, stdev=1110.97, samples=9 00:12:46.578 lat (usec) : 250=0.61%, 500=2.58%, 750=6.15%, 1000=10.41% 00:12:46.578 lat (msec) : 2=66.19%, 4=13.96%, 10=0.11% 00:12:46.578 cpu : usr=37.70%, sys=53.76%, ctx=9, majf=0, minf=764 00:12:46.578 IO depths : 1=0.4%, 2=1.1%, 4=3.0%, 8=8.2%, 16=23.2%, 32=61.9%, >=64=2.1% 00:12:46.578 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:46.578 complete : 0=0.0%, 4=98.0%, 8=0.1%, 16=0.1%, 32=0.3%, 64=1.7%, >=64=0.0% 00:12:46.578 issued rwts: total=149727,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:46.578 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:46.578 00:12:46.578 Run status group 0 (all jobs): 00:12:46.578 READ: bw=117MiB/s (123MB/s), 117MiB/s-117MiB/s (123MB/s-123MB/s), io=585MiB (613MB), run=5001-5001msec 00:12:46.838 ----------------------------------------------------- 00:12:46.838 Suppressions used: 00:12:46.838 count bytes template 00:12:46.838 1 11 /usr/src/fio/parse.c 00:12:46.838 1 8 libtcmalloc_minimal.so 00:12:46.838 1 904 libcrypto.so 00:12:46.838 ----------------------------------------------------- 00:12:46.838 00:12:46.838 19:14:31 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:46.838 19:14:31 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:46.838 19:14:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:46.838 19:14:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:12:46.838 19:14:31 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:12:46.838 19:14:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:46.838 19:14:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:12:46.838 19:14:31 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:12:46.838 19:14:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:12:46.838 19:14:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:46.838 19:14:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:12:46.838 19:14:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:12:46.838 19:14:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:12:46.838 19:14:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:46.838 19:14:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:12:46.838 19:14:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:12:46.838 19:14:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:46.838 19:14:31 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:46.838 19:14:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:12:46.838 19:14:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:46.838 19:14:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:46.838 { 00:12:46.838 "subsystems": [ 00:12:46.838 { 00:12:46.838 "subsystem": "bdev", 00:12:46.838 "config": [ 00:12:46.838 { 00:12:46.838 "params": { 00:12:46.838 "io_mechanism": "libaio", 00:12:46.838 "conserve_cpu": false, 00:12:46.838 "filename": "/dev/nvme0n1", 00:12:46.838 "name": "xnvme_bdev" 00:12:46.838 }, 00:12:46.838 "method": "bdev_xnvme_create" 00:12:46.838 }, 00:12:46.838 { 00:12:46.839 "method": "bdev_wait_for_examine" 00:12:46.839 } 00:12:46.839 ] 00:12:46.839 } 00:12:46.839 ] 00:12:46.839 } 00:12:47.099 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:12:47.099 fio-3.35 00:12:47.099 Starting 1 thread 00:12:53.689 00:12:53.689 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71171: Mon Dec 16 19:14:37 2024 00:12:53.689 write: IOPS=31.5k, BW=123MiB/s (129MB/s)(616MiB/5001msec); 0 zone resets 00:12:53.689 slat (usec): min=4, max=1842, avg=25.54, stdev=91.42 00:12:53.689 clat (usec): min=106, max=4881, avg=1327.09, stdev=572.67 00:12:53.689 lat (usec): min=184, max=4887, avg=1352.62, stdev=564.69 00:12:53.689 clat percentiles (usec): 00:12:53.689 | 1.00th=[ 265], 5.00th=[ 461], 10.00th=[ 627], 20.00th=[ 840], 00:12:53.689 | 30.00th=[ 1004], 40.00th=[ 1139], 50.00th=[ 1287], 60.00th=[ 1434], 00:12:53.689 | 70.00th=[ 1582], 80.00th=[ 1778], 90.00th=[ 2040], 95.00th=[ 2278], 00:12:53.689 | 99.00th=[ 3032], 99.50th=[ 3326], 99.90th=[ 3949], 99.95th=[ 4113], 00:12:53.689 | 99.99th=[ 4424] 00:12:53.689 bw ( KiB/s): min=115864, max=142552, per=100.00%, avg=126959.11, stdev=7600.70, samples=9 00:12:53.689 iops : min=28966, max=35638, avg=31739.78, stdev=1900.17, samples=9 00:12:53.689 lat (usec) : 250=0.83%, 500=5.27%, 750=9.10%, 1000=14.70% 00:12:53.689 lat (msec) : 2=58.96%, 4=11.07%, 10=0.08% 00:12:53.689 cpu : usr=31.46%, sys=56.68%, ctx=31, majf=0, minf=765 00:12:53.689 IO depths : 1=0.3%, 2=0.9%, 4=2.8%, 8=8.4%, 16=23.8%, 32=61.8%, >=64=2.0% 00:12:53.689 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:53.689 complete : 0=0.0%, 4=98.0%, 8=0.1%, 16=0.1%, 32=0.2%, 64=1.7%, >=64=0.0% 00:12:53.689 issued rwts: total=0,157615,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:53.689 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:53.689 00:12:53.689 Run status group 0 (all jobs): 00:12:53.689 WRITE: bw=123MiB/s (129MB/s), 123MiB/s-123MiB/s (129MB/s-129MB/s), io=616MiB (646MB), run=5001-5001msec 00:12:53.950 ----------------------------------------------------- 00:12:53.950 Suppressions used: 00:12:53.950 count bytes template 00:12:53.950 1 11 /usr/src/fio/parse.c 00:12:53.950 1 8 libtcmalloc_minimal.so 00:12:53.950 1 904 libcrypto.so 00:12:53.950 ----------------------------------------------------- 00:12:53.950 00:12:53.950 00:12:53.950 real 0m14.138s 00:12:53.950 user 0m6.429s 00:12:53.950 sys 0m6.295s 00:12:53.950 
************************************ 00:12:53.950 END TEST xnvme_fio_plugin 00:12:53.950 ************************************ 00:12:53.950 19:14:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:53.950 19:14:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:12:53.950 19:14:38 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:12:53.950 19:14:38 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:12:53.950 19:14:38 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:12:53.950 19:14:38 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:12:53.950 19:14:38 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:53.951 19:14:38 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:53.951 19:14:38 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:53.951 ************************************ 00:12:53.951 START TEST xnvme_rpc 00:12:53.951 ************************************ 00:12:53.951 19:14:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:12:53.951 19:14:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:12:53.951 19:14:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:12:53.951 19:14:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:12:53.951 19:14:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:12:53.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:53.951 19:14:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=71252 00:12:53.951 19:14:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 71252 00:12:53.951 19:14:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 71252 ']' 00:12:53.951 19:14:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:53.951 19:14:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:53.951 19:14:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:53.951 19:14:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:53.951 19:14:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:53.951 19:14:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:12:54.211 [2024-12-16 19:14:38.376430] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
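[Annotation] The xnvme_fio_plugin test that just finished drives stock fio through SPDK's external ioengine. A condensed sketch of that invocation, including the libasan preload step from the trace (the grep/awk over `ldd` output is why LD_PRELOAD lists libasan ahead of the plugin on an ASan build; asan_lib simply stays empty otherwise):

#!/usr/bin/env bash
# Sketch of the fio spdk_bdev plugin run traced above; fio flags are the
# ones from the trace, the compact JSON is the same bdev_xnvme_create
# config that gen_conf produced.
PLUGIN=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
asan_lib=$(ldd "$PLUGIN" | grep libasan | awk '{print $3}')

cat > /tmp/xnvme_bdev.json <<'EOF'
{"subsystems":[{"subsystem":"bdev","config":[
  {"method":"bdev_xnvme_create","params":{"io_mechanism":"libaio",
   "conserve_cpu":false,"filename":"/dev/nvme0n1","name":"xnvme_bdev"}},
  {"method":"bdev_wait_for_examine"}]}]}
EOF

LD_PRELOAD="${asan_lib:+$asan_lib }$PLUGIN" /usr/src/fio/fio \
  --ioengine=spdk_bdev --spdk_json_conf=/tmp/xnvme_bdev.json \
  --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 \
  --rw=randread --time_based --runtime=5 --thread=1 --name=xnvme_bdev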
00:12:54.211 [2024-12-16 19:14:38.376579] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71252 ] 00:12:54.211 [2024-12-16 19:14:38.537758] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:54.472 [2024-12-16 19:14:38.659237] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:12:55.044 19:14:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:55.044 19:14:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:12:55.044 19:14:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c 00:12:55.044 19:14:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.044 19:14:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.044 xnvme_bdev 00:12:55.044 19:14:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.044 19:14:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:12:55.044 19:14:39 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:12:55.044 19:14:39 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:55.044 19:14:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.044 19:14:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.305 19:14:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.306 19:14:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:12:55.306 19:14:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:12:55.306 19:14:39 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:12:55.306 19:14:39 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:55.306 19:14:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.306 19:14:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.306 19:14:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.306 19:14:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:12:55.306 19:14:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:12:55.306 19:14:39 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:55.306 19:14:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.306 19:14:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.306 19:14:39 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:12:55.306 19:14:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.306 19:14:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:12:55.306 19:14:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:12:55.306 19:14:39 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:12:55.306 19:14:39 
nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:55.306 19:14:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.306 19:14:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.306 19:14:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.306 19:14:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:12:55.306 19:14:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:12:55.306 19:14:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.306 19:14:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.306 19:14:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.306 19:14:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 71252 00:12:55.306 19:14:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 71252 ']' 00:12:55.306 19:14:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 71252 00:12:55.306 19:14:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:12:55.306 19:14:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:55.306 19:14:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71252 00:12:55.306 killing process with pid 71252 00:12:55.306 19:14:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:55.306 19:14:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:55.306 19:14:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71252' 00:12:55.306 19:14:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 71252 00:12:55.306 19:14:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 71252 00:12:57.222 ************************************ 00:12:57.222 END TEST xnvme_rpc 00:12:57.222 ************************************ 00:12:57.222 00:12:57.222 real 0m2.939s 00:12:57.222 user 0m2.885s 00:12:57.222 sys 0m0.514s 00:12:57.222 19:14:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:57.222 19:14:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.222 19:14:41 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:12:57.222 19:14:41 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:57.222 19:14:41 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:57.222 19:14:41 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:57.222 ************************************ 00:12:57.222 START TEST xnvme_bdevperf 00:12:57.222 ************************************ 00:12:57.222 19:14:41 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:12:57.222 19:14:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:12:57.222 19:14:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:12:57.222 19:14:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:57.222 19:14:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:12:57.222 19:14:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # 
gen_conf 00:12:57.222 19:14:41 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:57.222 19:14:41 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:57.222 { 00:12:57.222 "subsystems": [ 00:12:57.222 { 00:12:57.222 "subsystem": "bdev", 00:12:57.222 "config": [ 00:12:57.222 { 00:12:57.222 "params": { 00:12:57.222 "io_mechanism": "libaio", 00:12:57.222 "conserve_cpu": true, 00:12:57.222 "filename": "/dev/nvme0n1", 00:12:57.222 "name": "xnvme_bdev" 00:12:57.222 }, 00:12:57.222 "method": "bdev_xnvme_create" 00:12:57.222 }, 00:12:57.222 { 00:12:57.222 "method": "bdev_wait_for_examine" 00:12:57.222 } 00:12:57.222 ] 00:12:57.222 } 00:12:57.222 ] 00:12:57.222 } 00:12:57.222 [2024-12-16 19:14:41.365845] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:12:57.222 [2024-12-16 19:14:41.366189] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71326 ] 00:12:57.222 [2024-12-16 19:14:41.530981] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:57.483 [2024-12-16 19:14:41.652317] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:12:57.745 Running I/O for 5 seconds... 00:12:59.629 31677.00 IOPS, 123.74 MiB/s [2024-12-16T19:14:45.368Z] 30954.50 IOPS, 120.92 MiB/s [2024-12-16T19:14:46.311Z] 31094.33 IOPS, 121.46 MiB/s [2024-12-16T19:14:47.255Z] 30395.75 IOPS, 118.73 MiB/s 00:13:02.901 Latency(us) 00:13:02.901 [2024-12-16T19:14:47.255Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:02.901 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:02.901 xnvme_bdev : 5.00 30134.53 117.71 0.00 0.00 2119.10 460.01 9225.45 00:13:02.901 [2024-12-16T19:14:47.255Z] =================================================================================================================== 00:13:02.901 [2024-12-16T19:14:47.255Z] Total : 30134.53 117.71 0.00 0.00 2119.10 460.01 9225.45 00:13:03.474 19:14:47 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:03.474 19:14:47 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:13:03.474 19:14:47 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:13:03.474 19:14:47 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:03.474 19:14:47 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:03.735 { 00:13:03.735 "subsystems": [ 00:13:03.735 { 00:13:03.735 "subsystem": "bdev", 00:13:03.735 "config": [ 00:13:03.735 { 00:13:03.735 "params": { 00:13:03.735 "io_mechanism": "libaio", 00:13:03.735 "conserve_cpu": true, 00:13:03.735 "filename": "/dev/nvme0n1", 00:13:03.735 "name": "xnvme_bdev" 00:13:03.735 }, 00:13:03.735 "method": "bdev_xnvme_create" 00:13:03.735 }, 00:13:03.735 { 00:13:03.735 "method": "bdev_wait_for_examine" 00:13:03.735 } 00:13:03.735 ] 00:13:03.735 } 00:13:03.735 ] 00:13:03.735 } 00:13:03.735 [2024-12-16 19:14:47.871927] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
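[Annotation] Tying the RPC pieces together: the conserve_cpu=true xnvme_rpc test that preceded these bdevperf runs talks to spdk_tgt purely over JSON-RPC. A hand-driven sketch mirroring the rpc_cmd calls from the trace; it assumes scripts/rpc.py accepts the same positional arguments that rpc_cmd forwarded (filename, name, io_mechanism, plus -c for conserve_cpu, per cc["true"]=-c above), and the retry loop is a simplification of the harness's waitforlisten:

#!/usr/bin/env bash
# Sketch of the xnvme_rpc flow, mirroring the rpc_cmd calls traced above.
SPDK=/home/vagrant/spdk_repo/spdk

"$SPDK/build/bin/spdk_tgt" & tgt=$!
until "$SPDK/scripts/rpc.py" rpc_get_methods >/dev/null 2>&1; do
  sleep 0.2   # crude stand-in for the waitforlisten helper
done

"$SPDK/scripts/rpc.py" bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c

# Read the config back and verify one field, as the rpc_xnvme helper does:
"$SPDK/scripts/rpc.py" framework_get_config bdev |
  jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'  # -> true

"$SPDK/scripts/rpc.py" bdev_xnvme_delete xnvme_bdev
kill "$tgt"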
00:13:03.735 [2024-12-16 19:14:47.872069] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71401 ] 00:13:03.735 [2024-12-16 19:14:48.037693] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:03.996 [2024-12-16 19:14:48.158628] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:13:04.257 Running I/O for 5 seconds... 00:13:06.239 32889.00 IOPS, 128.47 MiB/s [2024-12-16T19:14:51.537Z] 33763.50 IOPS, 131.89 MiB/s [2024-12-16T19:14:52.923Z] 32624.00 IOPS, 127.44 MiB/s [2024-12-16T19:14:53.495Z] 31986.25 IOPS, 124.95 MiB/s 00:13:09.141 Latency(us) 00:13:09.141 [2024-12-16T19:14:53.495Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:09.141 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:13:09.141 xnvme_bdev : 5.00 31809.97 124.26 0.00 0.00 2007.46 291.45 7713.08 00:13:09.141 [2024-12-16T19:14:53.495Z] =================================================================================================================== 00:13:09.141 [2024-12-16T19:14:53.495Z] Total : 31809.97 124.26 0.00 0.00 2007.46 291.45 7713.08 00:13:10.085 00:13:10.085 real 0m13.000s 00:13:10.085 user 0m5.179s 00:13:10.085 sys 0m6.235s 00:13:10.085 19:14:54 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:10.085 19:14:54 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:10.085 ************************************ 00:13:10.085 END TEST xnvme_bdevperf 00:13:10.085 ************************************ 00:13:10.085 19:14:54 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:13:10.085 19:14:54 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:10.085 19:14:54 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:10.085 19:14:54 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:10.085 ************************************ 00:13:10.085 START TEST xnvme_fio_plugin 00:13:10.085 ************************************ 00:13:10.085 19:14:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:13:10.085 19:14:54 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:13:10.085 19:14:54 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:13:10.085 19:14:54 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:10.085 19:14:54 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:10.085 19:14:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:10.086 19:14:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:10.086 19:14:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:10.086 19:14:54 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:10.086 19:14:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:10.086 19:14:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:10.086 19:14:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:10.086 19:14:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:10.086 19:14:54 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:10.086 19:14:54 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:10.086 19:14:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:10.086 19:14:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:10.086 19:14:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:10.086 19:14:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:10.086 19:14:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:10.086 19:14:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:10.086 19:14:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:10.086 19:14:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:10.086 19:14:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:10.086 { 00:13:10.086 "subsystems": [ 00:13:10.086 { 00:13:10.086 "subsystem": "bdev", 00:13:10.086 "config": [ 00:13:10.086 { 00:13:10.086 "params": { 00:13:10.086 "io_mechanism": "libaio", 00:13:10.086 "conserve_cpu": true, 00:13:10.086 "filename": "/dev/nvme0n1", 00:13:10.086 "name": "xnvme_bdev" 00:13:10.086 }, 00:13:10.086 "method": "bdev_xnvme_create" 00:13:10.086 }, 00:13:10.086 { 00:13:10.086 "method": "bdev_wait_for_examine" 00:13:10.086 } 00:13:10.086 ] 00:13:10.086 } 00:13:10.086 ] 00:13:10.086 } 00:13:10.347 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:10.347 fio-3.35 00:13:10.347 Starting 1 thread 00:13:16.945 00:13:16.945 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71520: Mon Dec 16 19:15:00 2024 00:13:16.945 read: IOPS=31.2k, BW=122MiB/s (128MB/s)(609MiB/5001msec) 00:13:16.945 slat (usec): min=4, max=2818, avg=22.72, stdev=106.02 00:13:16.945 clat (usec): min=70, max=5062, avg=1443.58, stdev=522.00 00:13:16.945 lat (usec): min=185, max=5170, avg=1466.30, stdev=509.66 00:13:16.945 clat percentiles (usec): 00:13:16.945 | 1.00th=[ 297], 5.00th=[ 627], 10.00th=[ 791], 20.00th=[ 1012], 00:13:16.945 | 30.00th=[ 1172], 40.00th=[ 1319], 50.00th=[ 1450], 60.00th=[ 1565], 00:13:16.945 | 70.00th=[ 1680], 80.00th=[ 1827], 90.00th=[ 2057], 95.00th=[ 2278], 00:13:16.945 | 99.00th=[ 2966], 99.50th=[ 3261], 99.90th=[ 3949], 99.95th=[ 4146], 00:13:16.945 | 99.99th=[ 4490] 00:13:16.945 bw ( KiB/s): min=118928, max=132136, per=100.00%, avg=125109.33, stdev=5168.02, 
samples=9 00:13:16.945 iops : min=29732, max=33034, avg=31277.33, stdev=1292.01, samples=9 00:13:16.945 lat (usec) : 100=0.01%, 250=0.53%, 500=2.24%, 750=5.78%, 1000=10.97% 00:13:16.945 lat (msec) : 2=68.88%, 4=11.52%, 10=0.08% 00:13:16.945 cpu : usr=42.08%, sys=49.78%, ctx=32, majf=0, minf=764 00:13:16.945 IO depths : 1=0.5%, 2=1.2%, 4=3.1%, 8=8.0%, 16=22.3%, 32=62.7%, >=64=2.1% 00:13:16.945 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:16.945 complete : 0=0.0%, 4=97.9%, 8=0.1%, 16=0.1%, 32=0.3%, 64=1.6%, >=64=0.0% 00:13:16.945 issued rwts: total=155817,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:16.945 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:16.945 00:13:16.945 Run status group 0 (all jobs): 00:13:16.945 READ: bw=122MiB/s (128MB/s), 122MiB/s-122MiB/s (128MB/s-128MB/s), io=609MiB (638MB), run=5001-5001msec 00:13:17.207 ----------------------------------------------------- 00:13:17.207 Suppressions used: 00:13:17.207 count bytes template 00:13:17.207 1 11 /usr/src/fio/parse.c 00:13:17.207 1 8 libtcmalloc_minimal.so 00:13:17.207 1 904 libcrypto.so 00:13:17.207 ----------------------------------------------------- 00:13:17.207 00:13:17.207 19:15:01 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:17.207 19:15:01 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:17.207 19:15:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:17.207 19:15:01 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:17.207 19:15:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:17.207 19:15:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:17.207 19:15:01 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:17.207 19:15:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:17.207 19:15:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:17.207 19:15:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:17.207 19:15:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:17.207 19:15:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:17.207 19:15:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:17.207 19:15:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:17.207 19:15:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:17.207 19:15:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:17.207 19:15:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:17.207 19:15:01 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:17.207 19:15:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:17.207 19:15:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:17.207 19:15:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:17.207 { 00:13:17.207 "subsystems": [ 00:13:17.207 { 00:13:17.207 "subsystem": "bdev", 00:13:17.207 "config": [ 00:13:17.207 { 00:13:17.207 "params": { 00:13:17.207 "io_mechanism": "libaio", 00:13:17.207 "conserve_cpu": true, 00:13:17.207 "filename": "/dev/nvme0n1", 00:13:17.207 "name": "xnvme_bdev" 00:13:17.207 }, 00:13:17.207 "method": "bdev_xnvme_create" 00:13:17.207 }, 00:13:17.207 { 00:13:17.207 "method": "bdev_wait_for_examine" 00:13:17.207 } 00:13:17.207 ] 00:13:17.207 } 00:13:17.207 ] 00:13:17.207 } 00:13:17.468 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:17.468 fio-3.35 00:13:17.468 Starting 1 thread 00:13:24.058 00:13:24.058 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71612: Mon Dec 16 19:15:07 2024 00:13:24.058 write: IOPS=30.0k, BW=117MiB/s (123MB/s)(585MiB/5002msec); 0 zone resets 00:13:24.058 slat (usec): min=4, max=1821, avg=26.63, stdev=101.84 00:13:24.058 clat (usec): min=106, max=7468, avg=1409.33, stdev=597.16 00:13:24.058 lat (usec): min=201, max=7474, avg=1435.96, stdev=587.84 00:13:24.058 clat percentiles (usec): 00:13:24.058 | 1.00th=[ 269], 5.00th=[ 494], 10.00th=[ 660], 20.00th=[ 898], 00:13:24.058 | 30.00th=[ 1074], 40.00th=[ 1237], 50.00th=[ 1385], 60.00th=[ 1532], 00:13:24.058 | 70.00th=[ 1680], 80.00th=[ 1860], 90.00th=[ 2147], 95.00th=[ 2442], 00:13:24.058 | 99.00th=[ 3097], 99.50th=[ 3392], 99.90th=[ 4228], 99.95th=[ 4490], 00:13:24.058 | 99.99th=[ 4948] 00:13:24.058 bw ( KiB/s): min=113888, max=124416, per=100.00%, avg=119999.11, stdev=3289.01, samples=9 00:13:24.058 iops : min=28472, max=31104, avg=29999.78, stdev=822.25, samples=9 00:13:24.058 lat (usec) : 250=0.77%, 500=4.40%, 750=8.36%, 1000=12.19% 00:13:24.058 lat (msec) : 2=60.20%, 4=13.94%, 10=0.15% 00:13:24.058 cpu : usr=33.87%, sys=56.91%, ctx=7, majf=0, minf=765 00:13:24.058 IO depths : 1=0.3%, 2=1.0%, 4=2.9%, 8=8.4%, 16=23.7%, 32=61.6%, >=64=2.0% 00:13:24.058 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:24.058 complete : 0=0.0%, 4=98.0%, 8=0.1%, 16=0.1%, 32=0.2%, 64=1.7%, >=64=0.0% 00:13:24.058 issued rwts: total=0,149834,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:24.058 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:24.058 00:13:24.058 Run status group 0 (all jobs): 00:13:24.058 WRITE: bw=117MiB/s (123MB/s), 117MiB/s-117MiB/s (123MB/s-123MB/s), io=585MiB (614MB), run=5002-5002msec 00:13:24.058 ----------------------------------------------------- 00:13:24.058 Suppressions used: 00:13:24.058 count bytes template 00:13:24.058 1 11 /usr/src/fio/parse.c 00:13:24.058 1 8 libtcmalloc_minimal.so 00:13:24.058 1 904 libcrypto.so 00:13:24.058 ----------------------------------------------------- 00:13:24.058 00:13:24.058 ************************************ 00:13:24.058 END TEST xnvme_fio_plugin 00:13:24.058 ************************************ 00:13:24.058 
00:13:24.058 real 0m14.047s 00:13:24.058 user 0m6.744s 00:13:24.058 sys 0m6.047s 00:13:24.058 19:15:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:24.058 19:15:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:24.320 19:15:08 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:13:24.320 19:15:08 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:13:24.320 19:15:08 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:13:24.320 19:15:08 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:13:24.320 19:15:08 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:13:24.320 19:15:08 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:13:24.320 19:15:08 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:13:24.320 19:15:08 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:13:24.320 19:15:08 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:13:24.320 19:15:08 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:24.320 19:15:08 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:24.320 19:15:08 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:24.320 ************************************ 00:13:24.320 START TEST xnvme_rpc 00:13:24.320 ************************************ 00:13:24.320 19:15:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:13:24.320 19:15:08 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:13:24.320 19:15:08 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:13:24.320 19:15:08 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:13:24.320 19:15:08 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:13:24.320 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:24.320 19:15:08 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=71698 00:13:24.320 19:15:08 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 71698 00:13:24.320 19:15:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 71698 ']' 00:13:24.320 19:15:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:24.320 19:15:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:24.320 19:15:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:24.320 19:15:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:24.320 19:15:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:24.320 19:15:08 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:24.320 [2024-12-16 19:15:08.572852] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
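The xnvme_rpc subtest starting here brings up a bare spdk_tgt, creates the bdev over JSON-RPC, then reads each parameter back out of framework_get_config to verify it. A hedged sketch of the same sequence driven by hand with SPDK's stock rpc.py client (the sleep is a crude stand-in for the harness's waitforlisten helper):

    cd /home/vagrant/spdk_repo/spdk
    ./build/bin/spdk_tgt &        # serves JSON-RPC on /var/tmp/spdk.sock
    sleep 1                       # stand-in for waitforlisten
    ./scripts/rpc.py bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring
    ./scripts/rpc.py framework_get_config bdev |
        jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism'
    ./scripts/rpc.py bdev_xnvme_delete xnvme_bdev
    kill %1

The jq filter is the same one the rpc_xnvme helper applies in the traces below for name, filename, io_mechanism, and conserve_cpu.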
00:13:24.320 [2024-12-16 19:15:08.573401] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71698 ] 00:13:24.581 [2024-12-16 19:15:08.735268] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:24.581 [2024-12-16 19:15:08.864783] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:13:25.521 19:15:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:25.521 19:15:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:25.521 19:15:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring '' 00:13:25.521 19:15:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.521 19:15:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:25.521 xnvme_bdev 00:13:25.521 19:15:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.521 19:15:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:13:25.521 19:15:09 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:25.521 19:15:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.521 19:15:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:25.521 19:15:09 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:13:25.521 19:15:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.521 19:15:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:13:25.521 19:15:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:13:25.521 19:15:09 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:13:25.521 19:15:09 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:25.521 19:15:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.521 19:15:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:25.521 19:15:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.521 19:15:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:13:25.521 19:15:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:13:25.521 19:15:09 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:25.521 19:15:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.521 19:15:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:25.521 19:15:09 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:13:25.521 19:15:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.521 19:15:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:13:25.521 19:15:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:13:25.521 19:15:09 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:25.521 19:15:09 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 
-- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:13:25.521 19:15:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.521 19:15:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:25.521 19:15:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.521 19:15:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:13:25.521 19:15:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:13:25.521 19:15:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.522 19:15:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:25.522 19:15:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.522 19:15:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 71698 00:13:25.522 19:15:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 71698 ']' 00:13:25.522 19:15:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 71698 00:13:25.522 19:15:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:13:25.522 19:15:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:25.522 19:15:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71698 00:13:25.522 19:15:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:25.522 killing process with pid 71698 00:13:25.522 19:15:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:25.522 19:15:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71698' 00:13:25.522 19:15:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 71698 00:13:25.522 19:15:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 71698 00:13:27.439 00:13:27.439 real 0m3.118s 00:13:27.439 user 0m3.029s 00:13:27.439 sys 0m0.559s 00:13:27.439 19:15:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:27.439 19:15:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:27.439 ************************************ 00:13:27.439 END TEST xnvme_rpc 00:13:27.439 ************************************ 00:13:27.439 19:15:11 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:13:27.439 19:15:11 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:27.439 19:15:11 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:27.439 19:15:11 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:27.439 ************************************ 00:13:27.439 START TEST xnvme_bdevperf 00:13:27.439 ************************************ 00:13:27.439 19:15:11 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:13:27.439 19:15:11 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:13:27.439 19:15:11 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:13:27.439 19:15:11 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:27.439 19:15:11 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:13:27.439 19:15:11 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # 
gen_conf 00:13:27.439 19:15:11 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:27.439 19:15:11 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:27.439 { 00:13:27.439 "subsystems": [ 00:13:27.439 { 00:13:27.439 "subsystem": "bdev", 00:13:27.439 "config": [ 00:13:27.439 { 00:13:27.439 "params": { 00:13:27.439 "io_mechanism": "io_uring", 00:13:27.439 "conserve_cpu": false, 00:13:27.439 "filename": "/dev/nvme0n1", 00:13:27.439 "name": "xnvme_bdev" 00:13:27.439 }, 00:13:27.439 "method": "bdev_xnvme_create" 00:13:27.439 }, 00:13:27.439 { 00:13:27.439 "method": "bdev_wait_for_examine" 00:13:27.439 } 00:13:27.439 ] 00:13:27.439 } 00:13:27.439 ] 00:13:27.439 } 00:13:27.439 [2024-12-16 19:15:11.729559] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:13:27.439 [2024-12-16 19:15:11.729925] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71772 ] 00:13:27.700 [2024-12-16 19:15:11.894432] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:27.700 [2024-12-16 19:15:12.014336] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:13:27.962 Running I/O for 5 seconds... 00:13:30.291 34810.00 IOPS, 135.98 MiB/s [2024-12-16T19:15:15.590Z] 33120.50 IOPS, 129.38 MiB/s [2024-12-16T19:15:16.534Z] 32674.67 IOPS, 127.64 MiB/s [2024-12-16T19:15:17.478Z] 32890.00 IOPS, 128.48 MiB/s [2024-12-16T19:15:17.478Z] 32786.20 IOPS, 128.07 MiB/s 00:13:33.124 Latency(us) 00:13:33.124 [2024-12-16T19:15:17.478Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:33.124 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:33.124 xnvme_bdev : 5.00 32764.74 127.99 0.00 0.00 1948.94 412.75 4814.38 00:13:33.124 [2024-12-16T19:15:17.478Z] =================================================================================================================== 00:13:33.124 [2024-12-16T19:15:17.478Z] Total : 32764.74 127.99 0.00 0.00 1948.94 412.75 4814.38 00:13:34.067 19:15:18 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:34.067 19:15:18 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:13:34.067 19:15:18 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:13:34.067 19:15:18 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:34.067 19:15:18 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:34.067 { 00:13:34.067 "subsystems": [ 00:13:34.067 { 00:13:34.067 "subsystem": "bdev", 00:13:34.067 "config": [ 00:13:34.067 { 00:13:34.067 "params": { 00:13:34.067 "io_mechanism": "io_uring", 00:13:34.068 "conserve_cpu": false, 00:13:34.068 "filename": "/dev/nvme0n1", 00:13:34.068 "name": "xnvme_bdev" 00:13:34.068 }, 00:13:34.068 "method": "bdev_xnvme_create" 00:13:34.068 }, 00:13:34.068 { 00:13:34.068 "method": "bdev_wait_for_examine" 00:13:34.068 } 00:13:34.068 ] 00:13:34.068 } 00:13:34.068 ] 00:13:34.068 } 00:13:34.068 [2024-12-16 19:15:18.262954] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
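The MiB/s column in these summary tables is derived, not measured separately: every I/O is the 4096 bytes requested with -o, so throughput is IOPS x 4096 / 2^20. Checking the randread total a few lines up:

    # 32764.74 IOPS at 4 KiB per I/O
    awk 'BEGIN { printf "%.2f MiB/s\n", 32764.74 * 4096 / 1048576 }'   # -> 127.99 MiB/s

which matches the 127.99 MiB/s the table reports.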
00:13:34.068 [2024-12-16 19:15:18.263093] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71853 ] 00:13:34.329 [2024-12-16 19:15:18.429233] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:34.329 [2024-12-16 19:15:18.548922] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:13:34.589 Running I/O for 5 seconds... 00:13:36.517 36233.00 IOPS, 141.54 MiB/s [2024-12-16T19:15:22.259Z] 34180.00 IOPS, 133.52 MiB/s [2024-12-16T19:15:23.202Z] 33829.00 IOPS, 132.14 MiB/s [2024-12-16T19:15:24.147Z] 33788.25 IOPS, 131.99 MiB/s [2024-12-16T19:15:24.147Z] 33712.20 IOPS, 131.69 MiB/s 00:13:39.793 Latency(us) 00:13:39.793 [2024-12-16T19:15:24.147Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:39.793 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:13:39.793 xnvme_bdev : 5.00 33711.52 131.69 0.00 0.00 1894.78 390.70 7259.37 00:13:39.793 [2024-12-16T19:15:24.147Z] =================================================================================================================== 00:13:39.793 [2024-12-16T19:15:24.147Z] Total : 33711.52 131.69 0.00 0.00 1894.78 390.70 7259.37 00:13:40.367 00:13:40.367 real 0m12.980s 00:13:40.367 user 0m6.348s 00:13:40.367 sys 0m6.370s 00:13:40.367 ************************************ 00:13:40.367 END TEST xnvme_bdevperf 00:13:40.367 ************************************ 00:13:40.367 19:15:24 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:40.367 19:15:24 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:40.367 19:15:24 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:13:40.367 19:15:24 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:40.367 19:15:24 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:40.367 19:15:24 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:40.367 ************************************ 00:13:40.367 START TEST xnvme_fio_plugin 00:13:40.367 ************************************ 00:13:40.367 19:15:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:13:40.367 19:15:24 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:13:40.367 19:15:24 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:13:40.367 19:15:24 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:40.367 19:15:24 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:40.367 19:15:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:40.367 19:15:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:40.367 19:15:24 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:40.367 
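The fio_bdev wrapper traced here resolves the ASan runtime with ldd/grep/awk and preloads it ahead of SPDK's fio plugin; ASan insists on being the first loaded object, which is why it must precede spdk_bdev in LD_PRELOAD. Stripped of the harness, the invocation reduces to this sketch (paths as in the log; /tmp/xnvme.json is a hypothetical stand-in for the JSON the harness pipes through /dev/fd/62):

    LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/tmp/xnvme.json \
        --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 \
        --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev

Note that --filename names the bdev from the JSON config, not a device node: the spdk_bdev ioengine resolves filenames against the bdev layer.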
19:15:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:40.367 19:15:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:40.367 19:15:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:40.367 19:15:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:40.367 19:15:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:40.367 19:15:24 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:40.367 19:15:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:40.367 19:15:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:40.367 19:15:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:40.367 19:15:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:40.367 19:15:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:40.627 19:15:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:40.627 19:15:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:40.627 19:15:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:40.627 19:15:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:40.627 19:15:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:40.627 { 00:13:40.627 "subsystems": [ 00:13:40.627 { 00:13:40.627 "subsystem": "bdev", 00:13:40.627 "config": [ 00:13:40.627 { 00:13:40.627 "params": { 00:13:40.627 "io_mechanism": "io_uring", 00:13:40.627 "conserve_cpu": false, 00:13:40.627 "filename": "/dev/nvme0n1", 00:13:40.627 "name": "xnvme_bdev" 00:13:40.627 }, 00:13:40.627 "method": "bdev_xnvme_create" 00:13:40.627 }, 00:13:40.627 { 00:13:40.627 "method": "bdev_wait_for_examine" 00:13:40.627 } 00:13:40.627 ] 00:13:40.627 } 00:13:40.627 ] 00:13:40.627 } 00:13:40.627 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:40.627 fio-3.35 00:13:40.627 Starting 1 thread 00:13:47.217 00:13:47.217 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71967: Mon Dec 16 19:15:30 2024 00:13:47.217 read: IOPS=33.6k, BW=131MiB/s (138MB/s)(657MiB/5002msec) 00:13:47.217 slat (usec): min=2, max=338, avg= 3.23, stdev= 2.56 00:13:47.217 clat (usec): min=845, max=4253, avg=1775.21, stdev=297.57 00:13:47.217 lat (usec): min=848, max=4256, avg=1778.43, stdev=297.82 00:13:47.217 clat percentiles (usec): 00:13:47.217 | 1.00th=[ 1237], 5.00th=[ 1352], 10.00th=[ 1418], 20.00th=[ 1516], 00:13:47.217 | 30.00th=[ 1598], 40.00th=[ 1680], 50.00th=[ 1762], 60.00th=[ 1844], 00:13:47.217 | 70.00th=[ 1909], 80.00th=[ 1991], 90.00th=[ 2114], 95.00th=[ 2278], 00:13:47.217 | 99.00th=[ 2671], 99.50th=[ 2868], 99.90th=[ 3359], 99.95th=[ 3458], 00:13:47.217 | 99.99th=[ 3752] 00:13:47.217 bw ( KiB/s): 
min=126464, max=142336, per=100.00%, avg=135252.44, stdev=5165.62, samples=9 00:13:47.217 iops : min=31616, max=35584, avg=33812.89, stdev=1291.26, samples=9 00:13:47.217 lat (usec) : 1000=0.02% 00:13:47.217 lat (msec) : 2=80.24%, 4=19.74%, 10=0.01% 00:13:47.217 cpu : usr=31.83%, sys=66.45%, ctx=69, majf=0, minf=762 00:13:47.217 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.1%, >=64=1.6% 00:13:47.217 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:47.217 complete : 0=0.0%, 4=98.5%, 8=0.1%, 16=0.0%, 32=0.1%, 64=1.5%, >=64=0.0% 00:13:47.217 issued rwts: total=168181,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:47.217 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:47.217 00:13:47.217 Run status group 0 (all jobs): 00:13:47.217 READ: bw=131MiB/s (138MB/s), 131MiB/s-131MiB/s (138MB/s-138MB/s), io=657MiB (689MB), run=5002-5002msec 00:13:47.479 ----------------------------------------------------- 00:13:47.479 Suppressions used: 00:13:47.479 count bytes template 00:13:47.479 1 11 /usr/src/fio/parse.c 00:13:47.479 1 8 libtcmalloc_minimal.so 00:13:47.479 1 904 libcrypto.so 00:13:47.479 ----------------------------------------------------- 00:13:47.479 00:13:47.479 19:15:31 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:47.479 19:15:31 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:47.479 19:15:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:47.479 19:15:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:47.479 19:15:31 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:47.479 19:15:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:47.479 19:15:31 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:47.479 19:15:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:47.479 19:15:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:47.479 19:15:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:47.479 19:15:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:47.479 19:15:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:47.479 19:15:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:47.479 19:15:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:47.479 19:15:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:47.479 19:15:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:47.479 19:15:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:47.479 19:15:31 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:47.479 19:15:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:47.479 19:15:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:47.479 19:15:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:47.479 { 00:13:47.479 "subsystems": [ 00:13:47.479 { 00:13:47.479 "subsystem": "bdev", 00:13:47.479 "config": [ 00:13:47.479 { 00:13:47.479 "params": { 00:13:47.479 "io_mechanism": "io_uring", 00:13:47.479 "conserve_cpu": false, 00:13:47.479 "filename": "/dev/nvme0n1", 00:13:47.479 "name": "xnvme_bdev" 00:13:47.479 }, 00:13:47.479 "method": "bdev_xnvme_create" 00:13:47.479 }, 00:13:47.479 { 00:13:47.479 "method": "bdev_wait_for_examine" 00:13:47.479 } 00:13:47.479 ] 00:13:47.479 } 00:13:47.479 ] 00:13:47.479 } 00:13:47.479 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:47.479 fio-3.35 00:13:47.479 Starting 1 thread 00:13:54.069 00:13:54.069 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72063: Mon Dec 16 19:15:37 2024 00:13:54.069 write: IOPS=33.2k, BW=130MiB/s (136MB/s)(648MiB/5001msec); 0 zone resets 00:13:54.069 slat (nsec): min=2903, max=66719, avg=3706.88, stdev=1597.78 00:13:54.069 clat (usec): min=438, max=6773, avg=1783.10, stdev=286.35 00:13:54.069 lat (usec): min=442, max=6776, avg=1786.80, stdev=286.61 00:13:54.069 clat percentiles (usec): 00:13:54.070 | 1.00th=[ 1237], 5.00th=[ 1369], 10.00th=[ 1434], 20.00th=[ 1549], 00:13:54.070 | 30.00th=[ 1614], 40.00th=[ 1696], 50.00th=[ 1762], 60.00th=[ 1827], 00:13:54.070 | 70.00th=[ 1909], 80.00th=[ 2008], 90.00th=[ 2147], 95.00th=[ 2278], 00:13:54.070 | 99.00th=[ 2606], 99.50th=[ 2769], 99.90th=[ 3097], 99.95th=[ 3425], 00:13:54.070 | 99.99th=[ 3851] 00:13:54.070 bw ( KiB/s): min=124328, max=143712, per=100.00%, avg=132970.67, stdev=6562.17, samples=9 00:13:54.070 iops : min=31082, max=35928, avg=33242.67, stdev=1640.54, samples=9 00:13:54.070 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.02% 00:13:54.070 lat (msec) : 2=79.33%, 4=20.64%, 10=0.01% 00:13:54.070 cpu : usr=33.28%, sys=65.70%, ctx=16, majf=0, minf=763 00:13:54.070 IO depths : 1=1.5%, 2=3.1%, 4=6.2%, 8=12.5%, 16=24.9%, 32=50.2%, >=64=1.6% 00:13:54.070 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:54.070 complete : 0=0.0%, 4=98.5%, 8=0.1%, 16=0.0%, 32=0.1%, 64=1.5%, >=64=0.0% 00:13:54.070 issued rwts: total=0,165974,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:54.070 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:54.070 00:13:54.070 Run status group 0 (all jobs): 00:13:54.070 WRITE: bw=130MiB/s (136MB/s), 130MiB/s-130MiB/s (136MB/s-136MB/s), io=648MiB (680MB), run=5001-5001msec 00:13:54.331 ----------------------------------------------------- 00:13:54.331 Suppressions used: 00:13:54.331 count bytes template 00:13:54.331 1 11 /usr/src/fio/parse.c 00:13:54.331 1 8 libtcmalloc_minimal.so 00:13:54.331 1 904 libcrypto.so 00:13:54.331 ----------------------------------------------------- 00:13:54.331 00:13:54.331 00:13:54.331 real 0m13.798s 00:13:54.331 user 0m6.152s 00:13:54.331 sys 0m7.180s 00:13:54.331 19:15:38 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:54.331 ************************************ 00:13:54.331 END TEST xnvme_fio_plugin 00:13:54.331 ************************************ 00:13:54.331 19:15:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:54.331 19:15:38 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:13:54.331 19:15:38 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:13:54.331 19:15:38 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:13:54.331 19:15:38 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:13:54.331 19:15:38 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:54.331 19:15:38 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:54.331 19:15:38 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:54.331 ************************************ 00:13:54.331 START TEST xnvme_rpc 00:13:54.331 ************************************ 00:13:54.331 19:15:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:13:54.331 19:15:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:13:54.331 19:15:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:13:54.331 19:15:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:13:54.331 19:15:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:13:54.331 19:15:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=72147 00:13:54.331 19:15:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 72147 00:13:54.331 19:15:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 72147 ']' 00:13:54.331 19:15:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:54.331 19:15:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:54.331 19:15:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:54.331 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:54.331 19:15:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:54.331 19:15:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:54.331 19:15:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:54.331 [2024-12-16 19:15:38.675867] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
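From here the matrix repeats with conserve_cpu=true: the cc table maps "true" to the RPC's -c flag, and the same three subtests rerun. Pieced together from the xnvme.sh@75..@88 trace lines, the driver has roughly this shape (a reconstruction, not the verbatim script):

    # xnvme_io, xnvme_conserve_cpu, and the method_* associative array
    # are declared elsewhere in the harness.
    for io in "${xnvme_io[@]}"; do                      # libaio, io_uring, ...
        method_bdev_xnvme_create_0["io_mechanism"]=$io
        for cc in "${xnvme_conserve_cpu[@]}"; do        # false, true
            method_bdev_xnvme_create_0["conserve_cpu"]=$cc
            run_test xnvme_rpc xnvme_rpc                # create/verify/delete over RPC
            run_test xnvme_bdevperf xnvme_bdevperf      # randread + randwrite, QD 64
            run_test xnvme_fio_plugin xnvme_fio_plugin  # same workloads through fio
        done
    done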
00:13:54.331 [2024-12-16 19:15:38.676024] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72147 ] 00:13:54.592 [2024-12-16 19:15:38.841815] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:54.852 [2024-12-16 19:15:38.965292] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:13:55.425 19:15:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:55.425 19:15:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:55.425 19:15:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c 00:13:55.425 19:15:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.425 19:15:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:55.425 xnvme_bdev 00:13:55.425 19:15:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.425 19:15:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:13:55.425 19:15:39 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:55.425 19:15:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.425 19:15:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:55.425 19:15:39 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:13:55.425 19:15:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.425 19:15:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:13:55.425 19:15:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:13:55.425 19:15:39 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:13:55.425 19:15:39 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:55.425 19:15:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.425 19:15:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:55.425 19:15:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.425 19:15:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:13:55.425 19:15:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:13:55.425 19:15:39 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:13:55.425 19:15:39 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:55.425 19:15:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.425 19:15:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:55.425 19:15:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.686 19:15:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:13:55.686 19:15:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:13:55.686 19:15:39 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:55.686 19:15:39 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.686 19:15:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:55.686 19:15:39 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:13:55.686 19:15:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.686 19:15:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:13:55.686 19:15:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:13:55.686 19:15:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.686 19:15:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:55.686 19:15:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.686 19:15:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 72147 00:13:55.686 19:15:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 72147 ']' 00:13:55.686 19:15:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 72147 00:13:55.686 19:15:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:13:55.686 19:15:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:55.686 19:15:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72147 00:13:55.686 killing process with pid 72147 00:13:55.686 19:15:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:55.686 19:15:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:55.686 19:15:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72147' 00:13:55.686 19:15:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 72147 00:13:55.686 19:15:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 72147 00:13:57.603 00:13:57.603 real 0m2.933s 00:13:57.603 user 0m2.939s 00:13:57.603 sys 0m0.493s 00:13:57.603 ************************************ 00:13:57.603 END TEST xnvme_rpc 00:13:57.603 ************************************ 00:13:57.603 19:15:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:57.603 19:15:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:57.603 19:15:41 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:13:57.603 19:15:41 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:57.603 19:15:41 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:57.603 19:15:41 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:57.604 ************************************ 00:13:57.604 START TEST xnvme_bdevperf 00:13:57.604 ************************************ 00:13:57.604 19:15:41 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:13:57.604 19:15:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:13:57.604 19:15:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:13:57.604 19:15:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:57.604 19:15:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:13:57.604 19:15:41 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:13:57.604 19:15:41 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:57.604 19:15:41 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:57.604 { 00:13:57.604 "subsystems": [ 00:13:57.604 { 00:13:57.604 "subsystem": "bdev", 00:13:57.604 "config": [ 00:13:57.604 { 00:13:57.604 "params": { 00:13:57.604 "io_mechanism": "io_uring", 00:13:57.604 "conserve_cpu": true, 00:13:57.604 "filename": "/dev/nvme0n1", 00:13:57.604 "name": "xnvme_bdev" 00:13:57.604 }, 00:13:57.604 "method": "bdev_xnvme_create" 00:13:57.604 }, 00:13:57.604 { 00:13:57.604 "method": "bdev_wait_for_examine" 00:13:57.604 } 00:13:57.604 ] 00:13:57.604 } 00:13:57.604 ] 00:13:57.604 } 00:13:57.604 [2024-12-16 19:15:41.658707] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:13:57.604 [2024-12-16 19:15:41.658881] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72221 ] 00:13:57.604 [2024-12-16 19:15:41.824803] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:57.604 [2024-12-16 19:15:41.944702] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:13:58.176 Running I/O for 5 seconds... 00:14:00.066 31832.00 IOPS, 124.34 MiB/s [2024-12-16T19:15:45.364Z] 31515.00 IOPS, 123.11 MiB/s [2024-12-16T19:15:46.306Z] 31696.67 IOPS, 123.82 MiB/s [2024-12-16T19:15:47.252Z] 31793.25 IOPS, 124.19 MiB/s 00:14:02.898 Latency(us) 00:14:02.898 [2024-12-16T19:15:47.252Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:02.898 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:14:02.898 xnvme_bdev : 5.00 31723.92 123.92 0.00 0.00 2013.66 1071.26 7965.14 00:14:02.898 [2024-12-16T19:15:47.252Z] =================================================================================================================== 00:14:02.898 [2024-12-16T19:15:47.252Z] Total : 31723.92 123.92 0.00 0.00 2013.66 1071.26 7965.14 00:14:03.842 19:15:48 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:03.842 19:15:48 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:14:03.842 19:15:48 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:03.842 19:15:48 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:03.842 19:15:48 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:03.842 { 00:14:03.842 "subsystems": [ 00:14:03.842 { 00:14:03.842 "subsystem": "bdev", 00:14:03.842 "config": [ 00:14:03.842 { 00:14:03.842 "params": { 00:14:03.842 "io_mechanism": "io_uring", 00:14:03.842 "conserve_cpu": true, 00:14:03.842 "filename": "/dev/nvme0n1", 00:14:03.842 "name": "xnvme_bdev" 00:14:03.842 }, 00:14:03.842 "method": "bdev_xnvme_create" 00:14:03.842 }, 00:14:03.842 { 00:14:03.842 "method": "bdev_wait_for_examine" 00:14:03.842 } 00:14:03.842 ] 00:14:03.842 } 00:14:03.842 ] 00:14:03.842 } 00:14:03.842 [2024-12-16 19:15:48.122601] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
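The bracketed DPDK EAL parameter line printed next records how bdevperf pinned itself: -c 0x1 is a one-core mask, which is why every run here reports a single available core and one reactor on core 0. Widening that from the app command line uses the standard SPDK -m option (a sketch only; the single-core mask is what the harness actually runs, and /tmp/xnvme.json again stands in for the piped config):

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /tmp/xnvme.json \
        -m 0x3 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096   # reactors on cores 0 and 1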
00:14:03.842 [2024-12-16 19:15:48.123075] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72296 ] 00:14:04.103 [2024-12-16 19:15:48.290692] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:04.103 [2024-12-16 19:15:48.409581] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:14:04.432 Running I/O for 5 seconds... 00:14:06.774 33571.00 IOPS, 131.14 MiB/s [2024-12-16T19:15:52.071Z] 33676.00 IOPS, 131.55 MiB/s [2024-12-16T19:15:53.014Z] 33595.67 IOPS, 131.23 MiB/s [2024-12-16T19:15:53.957Z] 33608.00 IOPS, 131.28 MiB/s 00:14:09.603 Latency(us) 00:14:09.603 [2024-12-16T19:15:53.957Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:09.603 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:14:09.603 xnvme_bdev : 5.00 33496.14 130.84 0.00 0.00 1906.82 535.63 9175.04 00:14:09.603 [2024-12-16T19:15:53.957Z] =================================================================================================================== 00:14:09.603 [2024-12-16T19:15:53.957Z] Total : 33496.14 130.84 0.00 0.00 1906.82 535.63 9175.04 00:14:10.175 00:14:10.175 real 0m12.908s 00:14:10.175 user 0m9.342s 00:14:10.175 sys 0m3.031s 00:14:10.175 19:15:54 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:10.175 ************************************ 00:14:10.175 19:15:54 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:10.175 END TEST xnvme_bdevperf 00:14:10.175 ************************************ 00:14:10.436 19:15:54 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:14:10.436 19:15:54 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:10.436 19:15:54 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:10.437 19:15:54 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:10.437 ************************************ 00:14:10.437 START TEST xnvme_fio_plugin 00:14:10.437 ************************************ 00:14:10.437 19:15:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:14:10.437 19:15:54 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:14:10.437 19:15:54 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:14:10.437 19:15:54 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:10.437 19:15:54 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:10.437 19:15:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:10.437 19:15:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:10.437 19:15:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:10.437 19:15:54 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:10.437 19:15:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:10.437 19:15:54 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:10.437 19:15:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:10.437 19:15:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:10.437 19:15:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:10.437 19:15:54 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:10.437 19:15:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:10.437 19:15:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:10.437 19:15:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:10.437 19:15:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:10.437 19:15:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:10.437 19:15:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:10.437 19:15:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:10.437 19:15:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:10.437 19:15:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:10.437 { 00:14:10.437 "subsystems": [ 00:14:10.437 { 00:14:10.437 "subsystem": "bdev", 00:14:10.437 "config": [ 00:14:10.437 { 00:14:10.437 "params": { 00:14:10.437 "io_mechanism": "io_uring", 00:14:10.437 "conserve_cpu": true, 00:14:10.437 "filename": "/dev/nvme0n1", 00:14:10.437 "name": "xnvme_bdev" 00:14:10.437 }, 00:14:10.437 "method": "bdev_xnvme_create" 00:14:10.437 }, 00:14:10.437 { 00:14:10.437 "method": "bdev_wait_for_examine" 00:14:10.437 } 00:14:10.437 ] 00:14:10.437 } 00:14:10.437 ] 00:14:10.437 } 00:14:10.437 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:10.437 fio-3.35 00:14:10.437 Starting 1 thread 00:14:17.024 00:14:17.024 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72414: Mon Dec 16 19:16:00 2024 00:14:17.024 read: IOPS=31.9k, BW=124MiB/s (131MB/s)(623MiB/5001msec) 00:14:17.024 slat (nsec): min=2883, max=52656, avg=3450.60, stdev=1641.14 00:14:17.024 clat (usec): min=1122, max=3430, avg=1869.58, stdev=267.56 00:14:17.024 lat (usec): min=1125, max=3457, avg=1873.03, stdev=267.79 00:14:17.024 clat percentiles (usec): 00:14:17.024 | 1.00th=[ 1385], 5.00th=[ 1483], 10.00th=[ 1549], 20.00th=[ 1631], 00:14:17.025 | 30.00th=[ 1713], 40.00th=[ 1778], 50.00th=[ 1844], 60.00th=[ 1909], 00:14:17.025 | 70.00th=[ 1991], 80.00th=[ 2089], 90.00th=[ 2212], 95.00th=[ 2343], 00:14:17.025 | 99.00th=[ 2606], 99.50th=[ 2737], 99.90th=[ 2999], 99.95th=[ 3130], 00:14:17.025 | 99.99th=[ 3294] 00:14:17.025 bw ( KiB/s): min=125440, max=130048, per=100.00%, avg=127715.56, 
stdev=1848.01, samples=9 00:14:17.025 iops : min=31360, max=32512, avg=31928.89, stdev=462.00, samples=9 00:14:17.025 lat (msec) : 2=70.34%, 4=29.66% 00:14:17.025 cpu : usr=64.24%, sys=32.32%, ctx=17, majf=0, minf=762 00:14:17.025 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:14:17.025 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:17.025 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:14:17.025 issued rwts: total=159360,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:17.025 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:17.025 00:14:17.025 Run status group 0 (all jobs): 00:14:17.025 READ: bw=124MiB/s (131MB/s), 124MiB/s-124MiB/s (131MB/s-131MB/s), io=623MiB (653MB), run=5001-5001msec 00:14:17.286 ----------------------------------------------------- 00:14:17.286 Suppressions used: 00:14:17.286 count bytes template 00:14:17.286 1 11 /usr/src/fio/parse.c 00:14:17.286 1 8 libtcmalloc_minimal.so 00:14:17.286 1 904 libcrypto.so 00:14:17.286 ----------------------------------------------------- 00:14:17.286 00:14:17.286 19:16:01 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:17.286 19:16:01 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:17.286 19:16:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:17.286 19:16:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:17.286 19:16:01 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:17.286 19:16:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:17.286 19:16:01 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:17.286 19:16:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:17.286 19:16:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:17.286 19:16:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:17.286 19:16:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:17.286 19:16:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:17.286 19:16:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:17.286 19:16:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:17.286 19:16:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:17.286 19:16:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:17.286 19:16:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:17.286 19:16:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:17.286 19:16:01 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:17.286 19:16:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:17.286 19:16:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:17.286 { 00:14:17.286 "subsystems": [ 00:14:17.286 { 00:14:17.286 "subsystem": "bdev", 00:14:17.286 "config": [ 00:14:17.286 { 00:14:17.286 "params": { 00:14:17.286 "io_mechanism": "io_uring", 00:14:17.286 "conserve_cpu": true, 00:14:17.286 "filename": "/dev/nvme0n1", 00:14:17.286 "name": "xnvme_bdev" 00:14:17.286 }, 00:14:17.286 "method": "bdev_xnvme_create" 00:14:17.286 }, 00:14:17.286 { 00:14:17.286 "method": "bdev_wait_for_examine" 00:14:17.286 } 00:14:17.286 ] 00:14:17.286 } 00:14:17.286 ] 00:14:17.286 } 00:14:17.548 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:17.548 fio-3.35 00:14:17.548 Starting 1 thread 00:14:24.142 00:14:24.142 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72511: Mon Dec 16 19:16:07 2024 00:14:24.142 write: IOPS=35.0k, BW=137MiB/s (144MB/s)(685MiB/5002msec); 0 zone resets 00:14:24.142 slat (nsec): min=2924, max=59923, avg=3611.00, stdev=1555.72 00:14:24.142 clat (usec): min=889, max=7915, avg=1684.84, stdev=268.13 00:14:24.142 lat (usec): min=892, max=7918, avg=1688.45, stdev=268.38 00:14:24.142 clat percentiles (usec): 00:14:24.142 | 1.00th=[ 1205], 5.00th=[ 1303], 10.00th=[ 1385], 20.00th=[ 1467], 00:14:24.142 | 30.00th=[ 1532], 40.00th=[ 1598], 50.00th=[ 1663], 60.00th=[ 1729], 00:14:24.142 | 70.00th=[ 1795], 80.00th=[ 1876], 90.00th=[ 2024], 95.00th=[ 2147], 00:14:24.142 | 99.00th=[ 2442], 99.50th=[ 2573], 99.90th=[ 3064], 99.95th=[ 3163], 00:14:24.142 | 99.99th=[ 5342] 00:14:24.142 bw ( KiB/s): min=132096, max=148408, per=100.00%, avg=140308.11, stdev=5300.04, samples=9 00:14:24.142 iops : min=33024, max=37102, avg=35077.00, stdev=1325.03, samples=9 00:14:24.142 lat (usec) : 1000=0.02% 00:14:24.142 lat (msec) : 2=88.97%, 4=11.00%, 10=0.01% 00:14:24.142 cpu : usr=78.14%, sys=18.78%, ctx=13, majf=0, minf=763 00:14:24.142 IO depths : 1=1.5%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.1%, >=64=1.6% 00:14:24.142 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:24.142 complete : 0=0.0%, 4=98.5%, 8=0.1%, 16=0.0%, 32=0.1%, 64=1.5%, >=64=0.0% 00:14:24.142 issued rwts: total=0,175313,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:24.142 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:24.142 00:14:24.142 Run status group 0 (all jobs): 00:14:24.142 WRITE: bw=137MiB/s (144MB/s), 137MiB/s-137MiB/s (144MB/s-144MB/s), io=685MiB (718MB), run=5002-5002msec 00:14:24.142 ----------------------------------------------------- 00:14:24.142 Suppressions used: 00:14:24.142 count bytes template 00:14:24.142 1 11 /usr/src/fio/parse.c 00:14:24.142 1 8 libtcmalloc_minimal.so 00:14:24.142 1 904 libcrypto.so 00:14:24.142 ----------------------------------------------------- 00:14:24.142 00:14:24.142 00:14:24.142 real 0m13.830s 00:14:24.142 user 0m9.982s 00:14:24.142 sys 0m3.189s 00:14:24.142 ************************************ 00:14:24.142 END TEST xnvme_fio_plugin 00:14:24.142 ************************************ 00:14:24.142 19:16:08 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:24.142 19:16:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:24.142 19:16:08 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:14:24.142 19:16:08 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring_cmd 00:14:24.142 19:16:08 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/ng0n1 00:14:24.142 19:16:08 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/ng0n1 00:14:24.142 19:16:08 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:14:24.142 19:16:08 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:14:24.142 19:16:08 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:14:24.142 19:16:08 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:14:24.142 19:16:08 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:14:24.142 19:16:08 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:24.142 19:16:08 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:24.142 19:16:08 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:24.142 ************************************ 00:14:24.142 START TEST xnvme_rpc 00:14:24.142 ************************************ 00:14:24.142 19:16:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:14:24.142 19:16:08 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:14:24.142 19:16:08 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:14:24.142 19:16:08 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:14:24.142 19:16:08 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:14:24.142 19:16:08 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=72594 00:14:24.142 19:16:08 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 72594 00:14:24.142 19:16:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 72594 ']' 00:14:24.142 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:24.142 19:16:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:24.142 19:16:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:24.142 19:16:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:24.142 19:16:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:24.142 19:16:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:24.142 19:16:08 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:24.403 [2024-12-16 19:16:08.566253] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
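
The xnvme_rpc test this spdk_tgt serves boils down to three RPCs: create an xnvme bdev over the io_uring_cmd mechanism, read the registered config back out, and delete the bdev again. A minimal sketch of the same sequence driven by hand (not part of the captured run; assumes scripts/rpc.py from this SPDK checkout and the default /var/tmp/spdk.sock socket):

$ ./scripts/rpc.py bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd   # create the bdev on the /dev/ng0n1 char device
$ ./scripts/rpc.py framework_get_config bdev \
    | jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism'   # same filter the test's rpc_xnvme helper uses
io_uring_cmd
$ ./scripts/rpc.py bdev_xnvme_delete xnvme_bdev   # tear it down again
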
00:14:24.403 [2024-12-16 19:16:08.566709] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72594 ] 00:14:24.403 [2024-12-16 19:16:08.727389] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:24.665 [2024-12-16 19:16:08.856357] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:14:25.238 19:16:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:25.238 19:16:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:14:25.238 19:16:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd '' 00:14:25.238 19:16:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.238 19:16:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:25.238 xnvme_bdev 00:14:25.238 19:16:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.238 19:16:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:14:25.238 19:16:09 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:14:25.238 19:16:09 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:25.238 19:16:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.238 19:16:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:25.499 19:16:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.499 19:16:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:14:25.499 19:16:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:14:25.499 19:16:09 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:25.499 19:16:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.499 19:16:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:25.499 19:16:09 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:14:25.499 19:16:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.499 19:16:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:14:25.499 19:16:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:14:25.499 19:16:09 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:25.499 19:16:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.499 19:16:09 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:14:25.499 19:16:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:25.499 19:16:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.499 19:16:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:14:25.499 19:16:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:14:25.499 19:16:09 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:25.499 19:16:09 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.499 19:16:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:25.499 19:16:09 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:14:25.499 19:16:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.499 19:16:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:14:25.499 19:16:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:14:25.499 19:16:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.499 19:16:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:25.499 19:16:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.499 19:16:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 72594 00:14:25.499 19:16:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 72594 ']' 00:14:25.499 19:16:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 72594 00:14:25.499 19:16:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:14:25.499 19:16:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:25.499 19:16:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72594 00:14:25.499 killing process with pid 72594 00:14:25.499 19:16:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:25.499 19:16:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:25.499 19:16:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72594' 00:14:25.499 19:16:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 72594 00:14:25.499 19:16:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 72594 00:14:27.415 ************************************ 00:14:27.415 END TEST xnvme_rpc 00:14:27.415 ************************************ 00:14:27.415 00:14:27.415 real 0m2.926s 00:14:27.415 user 0m2.898s 00:14:27.415 sys 0m0.519s 00:14:27.415 19:16:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:27.415 19:16:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:27.415 19:16:11 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:14:27.415 19:16:11 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:27.415 19:16:11 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:27.415 19:16:11 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:27.415 ************************************ 00:14:27.415 START TEST xnvme_bdevperf 00:14:27.415 ************************************ 00:14:27.415 19:16:11 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:14:27.415 19:16:11 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:14:27.415 19:16:11 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:14:27.415 19:16:11 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:27.415 19:16:11 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:14:27.415 19:16:11 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:14:27.415 19:16:11 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:27.415 19:16:11 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:27.415 { 00:14:27.415 "subsystems": [ 00:14:27.415 { 00:14:27.415 "subsystem": "bdev", 00:14:27.415 "config": [ 00:14:27.415 { 00:14:27.415 "params": { 00:14:27.415 "io_mechanism": "io_uring_cmd", 00:14:27.415 "conserve_cpu": false, 00:14:27.415 "filename": "/dev/ng0n1", 00:14:27.415 "name": "xnvme_bdev" 00:14:27.415 }, 00:14:27.415 "method": "bdev_xnvme_create" 00:14:27.415 }, 00:14:27.415 { 00:14:27.415 "method": "bdev_wait_for_examine" 00:14:27.415 } 00:14:27.415 ] 00:14:27.415 } 00:14:27.415 ] 00:14:27.415 } 00:14:27.415 [2024-12-16 19:16:11.543122] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:14:27.415 [2024-12-16 19:16:11.543285] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72668 ] 00:14:27.415 [2024-12-16 19:16:11.709268] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:27.676 [2024-12-16 19:16:11.828602] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:14:27.936 Running I/O for 5 seconds... 00:14:29.824 33039.00 IOPS, 129.06 MiB/s [2024-12-16T19:16:15.566Z] 33856.00 IOPS, 132.25 MiB/s [2024-12-16T19:16:16.138Z] 34809.00 IOPS, 135.97 MiB/s [2024-12-16T19:16:17.523Z] 35227.00 IOPS, 137.61 MiB/s [2024-12-16T19:16:17.523Z] 35372.40 IOPS, 138.17 MiB/s 00:14:33.169 Latency(us) 00:14:33.169 [2024-12-16T19:16:17.523Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:33.169 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:14:33.169 xnvme_bdev : 5.00 35368.52 138.16 0.00 0.00 1806.07 349.74 4385.87 00:14:33.169 [2024-12-16T19:16:17.524Z] =================================================================================================================== 00:14:33.170 [2024-12-16T19:16:17.524Z] Total : 35368.52 138.16 0.00 0.00 1806.07 349.74 4385.87 00:14:33.740 19:16:17 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:33.740 19:16:17 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:14:33.740 19:16:17 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:33.740 19:16:17 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:33.740 19:16:17 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:33.740 { 00:14:33.740 "subsystems": [ 00:14:33.740 { 00:14:33.740 "subsystem": "bdev", 00:14:33.740 "config": [ 00:14:33.740 { 00:14:33.740 "params": { 00:14:33.740 "io_mechanism": "io_uring_cmd", 00:14:33.740 "conserve_cpu": false, 00:14:33.740 "filename": "/dev/ng0n1", 00:14:33.740 "name": "xnvme_bdev" 00:14:33.740 }, 00:14:33.740 "method": "bdev_xnvme_create" 00:14:33.740 }, 00:14:33.740 { 00:14:33.740 "method": "bdev_wait_for_examine" 00:14:33.740 } 00:14:33.740 ] 00:14:33.740 } 00:14:33.740 ] 00:14:33.740 } 00:14:33.740 [2024-12-16 19:16:17.997514] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
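
Each bdevperf pass in this test streams its generated JSON over /dev/fd/62; the same config, written out to a file, gives a standalone reproduction of the randwrite run that starts here. A sketch, assuming only that the config printed above is saved to disk first:

$ cat > /tmp/xnvme_bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "io_mechanism": "io_uring_cmd",
            "conserve_cpu": false,
            "filename": "/dev/ng0n1",
            "name": "xnvme_bdev"
          },
          "method": "bdev_xnvme_create"
        },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
EOF
$ /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /tmp/xnvme_bdev.json \
    -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096

Per the run banners bdevperf prints, -q is the queue depth (64), -w the workload, -t the run time in seconds, and -o the IO size in bytes, while -T matches the job name shown in each per-run banner.
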
00:14:33.740 [2024-12-16 19:16:17.997656] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72742 ] 00:14:34.044 [2024-12-16 19:16:18.161728] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:34.044 [2024-12-16 19:16:18.283034] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:14:34.323 Running I/O for 5 seconds... 00:14:36.651 34738.00 IOPS, 135.70 MiB/s [2024-12-16T19:16:21.577Z] 34831.50 IOPS, 136.06 MiB/s [2024-12-16T19:16:22.964Z] 34593.33 IOPS, 135.13 MiB/s [2024-12-16T19:16:23.907Z] 35481.75 IOPS, 138.60 MiB/s [2024-12-16T19:16:23.907Z] 35426.80 IOPS, 138.39 MiB/s 00:14:39.553 Latency(us) 00:14:39.553 [2024-12-16T19:16:23.907Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:39.554 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:14:39.554 xnvme_bdev : 5.00 35423.91 138.37 0.00 0.00 1803.12 333.98 6805.66 00:14:39.554 [2024-12-16T19:16:23.908Z] =================================================================================================================== 00:14:39.554 [2024-12-16T19:16:23.908Z] Total : 35423.91 138.37 0.00 0.00 1803.12 333.98 6805.66 00:14:40.126 19:16:24 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:40.126 19:16:24 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:40.126 19:16:24 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:14:40.126 19:16:24 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:40.126 19:16:24 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:40.126 { 00:14:40.126 "subsystems": [ 00:14:40.126 { 00:14:40.126 "subsystem": "bdev", 00:14:40.126 "config": [ 00:14:40.126 { 00:14:40.126 "params": { 00:14:40.126 "io_mechanism": "io_uring_cmd", 00:14:40.126 "conserve_cpu": false, 00:14:40.126 "filename": "/dev/ng0n1", 00:14:40.126 "name": "xnvme_bdev" 00:14:40.126 }, 00:14:40.126 "method": "bdev_xnvme_create" 00:14:40.126 }, 00:14:40.126 { 00:14:40.126 "method": "bdev_wait_for_examine" 00:14:40.126 } 00:14:40.126 ] 00:14:40.126 } 00:14:40.126 ] 00:14:40.126 } 00:14:40.126 [2024-12-16 19:16:24.428890] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:14:40.126 [2024-12-16 19:16:24.429040] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72816 ] 00:14:40.387 [2024-12-16 19:16:24.593896] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:40.387 [2024-12-16 19:16:24.709869] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:14:40.648 Running I/O for 5 seconds... 
00:14:42.976 77824.00 IOPS, 304.00 MiB/s [2024-12-16T19:16:28.274Z] 78400.00 IOPS, 306.25 MiB/s [2024-12-16T19:16:29.217Z] 78442.67 IOPS, 306.42 MiB/s [2024-12-16T19:16:30.160Z] 81696.00 IOPS, 319.12 MiB/s [2024-12-16T19:16:30.160Z] 84608.00 IOPS, 330.50 MiB/s 00:14:45.806 Latency(us) 00:14:45.806 [2024-12-16T19:16:30.160Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:45.806 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:14:45.806 xnvme_bdev : 5.00 84571.04 330.36 0.00 0.00 753.42 519.88 2747.47 00:14:45.806 [2024-12-16T19:16:30.160Z] =================================================================================================================== 00:14:45.806 [2024-12-16T19:16:30.160Z] Total : 84571.04 330.36 0.00 0.00 753.42 519.88 2747.47 00:14:46.378 19:16:30 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:46.378 19:16:30 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:14:46.378 19:16:30 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:46.378 19:16:30 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:46.378 19:16:30 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:46.378 { 00:14:46.378 "subsystems": [ 00:14:46.378 { 00:14:46.378 "subsystem": "bdev", 00:14:46.378 "config": [ 00:14:46.378 { 00:14:46.378 "params": { 00:14:46.378 "io_mechanism": "io_uring_cmd", 00:14:46.378 "conserve_cpu": false, 00:14:46.378 "filename": "/dev/ng0n1", 00:14:46.378 "name": "xnvme_bdev" 00:14:46.378 }, 00:14:46.378 "method": "bdev_xnvme_create" 00:14:46.378 }, 00:14:46.378 { 00:14:46.378 "method": "bdev_wait_for_examine" 00:14:46.378 } 00:14:46.378 ] 00:14:46.378 } 00:14:46.378 ] 00:14:46.378 } 00:14:46.378 [2024-12-16 19:16:30.610117] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:14:46.378 [2024-12-16 19:16:30.610257] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72885 ] 00:14:46.638 [2024-12-16 19:16:30.757356] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:46.638 [2024-12-16 19:16:30.831568] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:14:46.899 Running I/O for 5 seconds... 
00:14:48.784 48173.00 IOPS, 188.18 MiB/s [2024-12-16T19:16:34.082Z] 36918.00 IOPS, 144.21 MiB/s [2024-12-16T19:16:35.464Z] 33259.67 IOPS, 129.92 MiB/s [2024-12-16T19:16:36.036Z] 31480.50 IOPS, 122.97 MiB/s [2024-12-16T19:16:36.036Z] 30201.80 IOPS, 117.98 MiB/s 00:14:51.682 Latency(us) 00:14:51.682 [2024-12-16T19:16:36.036Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:51.682 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:14:51.682 xnvme_bdev : 5.00 30184.76 117.91 0.00 0.00 2115.61 63.41 28230.89 00:14:51.682 [2024-12-16T19:16:36.036Z] =================================================================================================================== 00:14:51.682 [2024-12-16T19:16:36.036Z] Total : 30184.76 117.91 0.00 0.00 2115.61 63.41 28230.89 00:14:52.625 00:14:52.625 real 0m25.339s 00:14:52.625 user 0m14.085s 00:14:52.625 sys 0m10.784s 00:14:52.625 19:16:36 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:52.625 ************************************ 00:14:52.625 END TEST xnvme_bdevperf 00:14:52.625 ************************************ 00:14:52.625 19:16:36 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:52.625 19:16:36 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:14:52.625 19:16:36 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:52.625 19:16:36 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:52.625 19:16:36 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:52.625 ************************************ 00:14:52.625 START TEST xnvme_fio_plugin 00:14:52.625 ************************************ 00:14:52.625 19:16:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:14:52.625 19:16:36 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:14:52.625 19:16:36 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:14:52.625 19:16:36 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:52.625 19:16:36 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:52.625 19:16:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:52.625 19:16:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:52.625 19:16:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:52.625 19:16:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:52.625 19:16:36 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:52.625 19:16:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:52.625 19:16:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:52.625 19:16:36 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 
00:14:52.625 19:16:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:52.625 19:16:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:52.625 19:16:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:52.625 19:16:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:52.625 19:16:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:52.625 19:16:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:52.625 19:16:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:52.625 19:16:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:52.625 19:16:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:52.625 19:16:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:52.625 19:16:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:52.625 { 00:14:52.625 "subsystems": [ 00:14:52.625 { 00:14:52.625 "subsystem": "bdev", 00:14:52.625 "config": [ 00:14:52.625 { 00:14:52.625 "params": { 00:14:52.625 "io_mechanism": "io_uring_cmd", 00:14:52.625 "conserve_cpu": false, 00:14:52.625 "filename": "/dev/ng0n1", 00:14:52.625 "name": "xnvme_bdev" 00:14:52.625 }, 00:14:52.625 "method": "bdev_xnvme_create" 00:14:52.625 }, 00:14:52.625 { 00:14:52.625 "method": "bdev_wait_for_examine" 00:14:52.625 } 00:14:52.625 ] 00:14:52.625 } 00:14:52.625 ] 00:14:52.625 } 00:14:52.887 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:52.887 fio-3.35 00:14:52.887 Starting 1 thread 00:14:59.475 00:14:59.475 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=73003: Mon Dec 16 19:16:42 2024 00:14:59.475 read: IOPS=42.4k, BW=166MiB/s (174MB/s)(829MiB/5001msec) 00:14:59.475 slat (nsec): min=2896, max=49706, avg=3142.01, stdev=1155.74 00:14:59.475 clat (usec): min=898, max=4451, avg=1385.97, stdev=250.87 00:14:59.475 lat (usec): min=901, max=4482, avg=1389.11, stdev=251.02 00:14:59.475 clat percentiles (usec): 00:14:59.475 | 1.00th=[ 1029], 5.00th=[ 1090], 10.00th=[ 1139], 20.00th=[ 1188], 00:14:59.475 | 30.00th=[ 1237], 40.00th=[ 1287], 50.00th=[ 1319], 60.00th=[ 1369], 00:14:59.475 | 70.00th=[ 1434], 80.00th=[ 1549], 90.00th=[ 1745], 95.00th=[ 1893], 00:14:59.475 | 99.00th=[ 2180], 99.50th=[ 2278], 99.90th=[ 2638], 99.95th=[ 2802], 00:14:59.475 | 99.99th=[ 4228] 00:14:59.475 bw ( KiB/s): min=158208, max=178688, per=98.91%, avg=167841.33, stdev=7222.24, samples=9 00:14:59.475 iops : min=39552, max=44672, avg=41960.33, stdev=1805.56, samples=9 00:14:59.475 lat (usec) : 1000=0.50% 00:14:59.475 lat (msec) : 2=96.80%, 4=2.68%, 10=0.02% 00:14:59.475 cpu : usr=42.24%, sys=56.80%, ctx=7, majf=0, minf=762 00:14:59.475 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:14:59.475 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:59.475 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=1.5%, >=64=0.0% 00:14:59.475 issued rwts: total=212160,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:59.475 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:59.475 00:14:59.475 Run status group 0 (all jobs): 00:14:59.475 READ: bw=166MiB/s (174MB/s), 166MiB/s-166MiB/s (174MB/s-174MB/s), io=829MiB (869MB), run=5001-5001msec 00:14:59.475 ----------------------------------------------------- 00:14:59.475 Suppressions used: 00:14:59.475 count bytes template 00:14:59.475 1 11 /usr/src/fio/parse.c 00:14:59.475 1 8 libtcmalloc_minimal.so 00:14:59.475 1 904 libcrypto.so 00:14:59.475 ----------------------------------------------------- 00:14:59.475 00:14:59.475 19:16:43 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:59.475 19:16:43 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:59.475 19:16:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:59.475 19:16:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:59.475 19:16:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:59.475 19:16:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:59.475 19:16:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:59.475 19:16:43 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:59.475 19:16:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:59.475 19:16:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:59.475 19:16:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:59.475 19:16:43 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:59.475 19:16:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:59.475 19:16:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:59.475 19:16:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:59.475 19:16:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:59.475 19:16:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:59.475 19:16:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:59.475 19:16:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:59.475 19:16:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:59.475 19:16:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 
--numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:59.475 { 00:14:59.475 "subsystems": [ 00:14:59.475 { 00:14:59.475 "subsystem": "bdev", 00:14:59.475 "config": [ 00:14:59.475 { 00:14:59.475 "params": { 00:14:59.475 "io_mechanism": "io_uring_cmd", 00:14:59.475 "conserve_cpu": false, 00:14:59.475 "filename": "/dev/ng0n1", 00:14:59.475 "name": "xnvme_bdev" 00:14:59.475 }, 00:14:59.475 "method": "bdev_xnvme_create" 00:14:59.475 }, 00:14:59.475 { 00:14:59.475 "method": "bdev_wait_for_examine" 00:14:59.475 } 00:14:59.475 ] 00:14:59.475 } 00:14:59.475 ] 00:14:59.475 } 00:14:59.736 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:59.736 fio-3.35 00:14:59.736 Starting 1 thread 00:15:06.381 00:15:06.381 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=73094: Mon Dec 16 19:16:49 2024 00:15:06.381 write: IOPS=39.0k, BW=152MiB/s (160MB/s)(763MiB/5013msec); 0 zone resets 00:15:06.381 slat (usec): min=2, max=145, avg= 3.72, stdev= 1.71 00:15:06.381 clat (usec): min=66, max=31980, avg=1502.99, stdev=1335.93 00:15:06.381 lat (usec): min=70, max=31984, avg=1506.71, stdev=1336.08 00:15:06.381 clat percentiles (usec): 00:15:06.381 | 1.00th=[ 734], 5.00th=[ 1004], 10.00th=[ 1090], 20.00th=[ 1172], 00:15:06.381 | 30.00th=[ 1237], 40.00th=[ 1303], 50.00th=[ 1369], 60.00th=[ 1450], 00:15:06.381 | 70.00th=[ 1532], 80.00th=[ 1631], 90.00th=[ 1795], 95.00th=[ 1926], 00:15:06.381 | 99.00th=[ 2474], 99.50th=[14484], 99.90th=[20317], 99.95th=[21627], 00:15:06.381 | 99.99th=[27395] 00:15:06.381 bw ( KiB/s): min=63608, max=179744, per=100.00%, avg=156170.20, stdev=35672.32, samples=10 00:15:06.381 iops : min=15902, max=44936, avg=39042.50, stdev=8918.05, samples=10 00:15:06.381 lat (usec) : 100=0.01%, 250=0.08%, 500=0.43%, 750=0.58%, 1000=3.63% 00:15:06.381 lat (msec) : 2=91.64%, 4=2.96%, 10=0.05%, 20=0.52%, 50=0.12% 00:15:06.381 cpu : usr=40.42%, sys=58.46%, ctx=11, majf=0, minf=763 00:15:06.381 IO depths : 1=1.4%, 2=2.8%, 4=5.6%, 8=11.4%, 16=23.1%, 32=53.6%, >=64=2.1% 00:15:06.381 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:06.381 complete : 0=0.0%, 4=98.3%, 8=0.1%, 16=0.1%, 32=0.2%, 64=1.4%, >=64=0.0% 00:15:06.381 issued rwts: total=0,195320,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:06.381 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:06.381 00:15:06.381 Run status group 0 (all jobs): 00:15:06.381 WRITE: bw=152MiB/s (160MB/s), 152MiB/s-152MiB/s (160MB/s-160MB/s), io=763MiB (800MB), run=5013-5013msec 00:15:06.381 ----------------------------------------------------- 00:15:06.381 Suppressions used: 00:15:06.381 count bytes template 00:15:06.381 1 11 /usr/src/fio/parse.c 00:15:06.381 1 8 libtcmalloc_minimal.so 00:15:06.381 1 904 libcrypto.so 00:15:06.381 ----------------------------------------------------- 00:15:06.381 00:15:06.381 00:15:06.381 real 0m13.826s 00:15:06.381 user 0m6.993s 00:15:06.381 sys 0m6.411s 00:15:06.381 19:16:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:06.381 19:16:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:06.381 ************************************ 00:15:06.381 END TEST xnvme_fio_plugin 00:15:06.381 ************************************ 00:15:06.642 19:16:50 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:15:06.642 19:16:50 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:15:06.642 19:16:50 
nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:15:06.642 19:16:50 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:15:06.642 19:16:50 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:06.642 19:16:50 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:06.642 19:16:50 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:06.642 ************************************ 00:15:06.642 START TEST xnvme_rpc 00:15:06.642 ************************************ 00:15:06.642 19:16:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:15:06.642 19:16:50 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:15:06.642 19:16:50 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:15:06.642 19:16:50 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:15:06.642 19:16:50 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:15:06.642 19:16:50 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=73178 00:15:06.642 19:16:50 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 73178 00:15:06.642 19:16:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 73178 ']' 00:15:06.642 19:16:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:06.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:06.642 19:16:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:06.642 19:16:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:06.642 19:16:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:06.642 19:16:50 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:06.642 19:16:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:06.642 [2024-12-16 19:16:50.870745] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
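
This second xnvme_rpc pass repeats the create/inspect/delete sequence with CPU conservation switched on; per the cc map set up above, that is just an extra -c on the create call. A sketch under the same assumptions as the earlier rpc.py example:

$ ./scripts/rpc.py bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c   # -c maps to "conserve_cpu": true
$ ./scripts/rpc.py framework_get_config bdev \
    | jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'
true
$ ./scripts/rpc.py bdev_xnvme_delete xnvme_bdev
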
00:15:06.642 [2024-12-16 19:16:50.870903] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73178 ] 00:15:06.904 [2024-12-16 19:16:51.033692] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:06.904 [2024-12-16 19:16:51.154334] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:07.847 19:16:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:07.847 19:16:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:15:07.847 19:16:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c 00:15:07.847 19:16:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.847 19:16:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:07.847 xnvme_bdev 00:15:07.847 19:16:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.847 19:16:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:15:07.847 19:16:51 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:07.847 19:16:51 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:15:07.847 19:16:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.847 19:16:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:07.847 19:16:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.847 19:16:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:15:07.847 19:16:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:15:07.847 19:16:51 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:15:07.847 19:16:51 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:07.847 19:16:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.847 19:16:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:07.847 19:16:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.847 19:16:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:15:07.847 19:16:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:15:07.847 19:16:51 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:07.847 19:16:51 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:15:07.847 19:16:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.847 19:16:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:07.847 19:16:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.847 19:16:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:15:07.847 19:16:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:15:07.847 19:16:51 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:07.847 19:16:51 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.847 19:16:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:07.847 19:16:51 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:15:07.847 19:16:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.847 19:16:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:15:07.847 19:16:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:15:07.847 19:16:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.847 19:16:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:07.847 19:16:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.847 19:16:52 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 73178 00:15:07.847 19:16:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 73178 ']' 00:15:07.847 19:16:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 73178 00:15:07.847 19:16:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:15:07.847 19:16:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:07.847 19:16:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73178 00:15:07.847 19:16:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:07.847 killing process with pid 73178 00:15:07.847 19:16:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:07.847 19:16:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73178' 00:15:07.847 19:16:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 73178 00:15:07.847 19:16:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 73178 00:15:09.763 00:15:09.763 real 0m2.895s 00:15:09.763 user 0m2.912s 00:15:09.763 sys 0m0.463s 00:15:09.763 19:16:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:09.763 ************************************ 00:15:09.763 END TEST xnvme_rpc 00:15:09.763 ************************************ 00:15:09.763 19:16:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:09.763 19:16:53 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:15:09.763 19:16:53 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:09.763 19:16:53 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:09.763 19:16:53 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:09.763 ************************************ 00:15:09.763 START TEST xnvme_bdevperf 00:15:09.763 ************************************ 00:15:09.763 19:16:53 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:15:09.763 19:16:53 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:15:09.763 19:16:53 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:15:09.763 19:16:53 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:09.763 19:16:53 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:15:09.763 19:16:53 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:15:09.763 19:16:53 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:09.763 19:16:53 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:09.763 { 00:15:09.763 "subsystems": [ 00:15:09.763 { 00:15:09.763 "subsystem": "bdev", 00:15:09.763 "config": [ 00:15:09.763 { 00:15:09.763 "params": { 00:15:09.763 "io_mechanism": "io_uring_cmd", 00:15:09.763 "conserve_cpu": true, 00:15:09.763 "filename": "/dev/ng0n1", 00:15:09.763 "name": "xnvme_bdev" 00:15:09.763 }, 00:15:09.763 "method": "bdev_xnvme_create" 00:15:09.763 }, 00:15:09.763 { 00:15:09.763 "method": "bdev_wait_for_examine" 00:15:09.763 } 00:15:09.763 ] 00:15:09.763 } 00:15:09.763 ] 00:15:09.763 } 00:15:09.763 [2024-12-16 19:16:53.824555] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:15:09.763 [2024-12-16 19:16:53.824710] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73248 ] 00:15:09.763 [2024-12-16 19:16:53.993625] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:10.025 [2024-12-16 19:16:54.115994] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:10.287 Running I/O for 5 seconds... 00:15:12.172 34877.00 IOPS, 136.24 MiB/s [2024-12-16T19:16:57.470Z] 37150.50 IOPS, 145.12 MiB/s [2024-12-16T19:16:58.413Z] 37673.67 IOPS, 147.16 MiB/s [2024-12-16T19:16:59.799Z] 38911.25 IOPS, 152.00 MiB/s [2024-12-16T19:16:59.799Z] 38386.60 IOPS, 149.95 MiB/s 00:15:15.445 Latency(us) 00:15:15.445 [2024-12-16T19:16:59.799Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:15.445 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:15:15.445 xnvme_bdev : 5.00 38382.19 149.93 0.00 0.00 1663.49 888.52 7108.14 00:15:15.445 [2024-12-16T19:16:59.799Z] =================================================================================================================== 00:15:15.445 [2024-12-16T19:16:59.799Z] Total : 38382.19 149.93 0.00 0.00 1663.49 888.52 7108.14 00:15:16.017 19:17:00 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:16.018 19:17:00 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:15:16.018 19:17:00 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:15:16.018 19:17:00 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:16.018 19:17:00 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:16.018 { 00:15:16.018 "subsystems": [ 00:15:16.018 { 00:15:16.018 "subsystem": "bdev", 00:15:16.018 "config": [ 00:15:16.018 { 00:15:16.018 "params": { 00:15:16.018 "io_mechanism": "io_uring_cmd", 00:15:16.018 "conserve_cpu": true, 00:15:16.018 "filename": "/dev/ng0n1", 00:15:16.018 "name": "xnvme_bdev" 00:15:16.018 }, 00:15:16.018 "method": "bdev_xnvme_create" 00:15:16.018 }, 00:15:16.018 { 00:15:16.018 "method": "bdev_wait_for_examine" 00:15:16.018 } 00:15:16.018 ] 00:15:16.018 } 00:15:16.018 ] 00:15:16.018 } 00:15:16.018 [2024-12-16 19:17:00.272773] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
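
Relative to the conserve_cpu=false bdevperf passes earlier, the only change in the generated config is the one flag below; the randread pass above came in around 38.4k IOPS against roughly 35.4k for the earlier run, though single 5-second runs are too noisy to read much into.

  "params": {
    "io_mechanism": "io_uring_cmd",
-   "conserve_cpu": false,
+   "conserve_cpu": true,
    "filename": "/dev/ng0n1",
    "name": "xnvme_bdev"
  }
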
00:15:16.018 [2024-12-16 19:17:00.272910] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73322 ] 00:15:16.278 [2024-12-16 19:17:00.437541] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:16.278 [2024-12-16 19:17:00.553167] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:16.539 Running I/O for 5 seconds... 00:15:18.496 38220.00 IOPS, 149.30 MiB/s [2024-12-16T19:17:04.237Z] 38012.00 IOPS, 148.48 MiB/s [2024-12-16T19:17:05.181Z] 38354.67 IOPS, 149.82 MiB/s [2024-12-16T19:17:06.124Z] 38106.75 IOPS, 148.85 MiB/s 00:15:21.770 Latency(us) 00:15:21.770 [2024-12-16T19:17:06.124Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:21.770 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:15:21.770 xnvme_bdev : 5.00 37857.06 147.88 0.00 0.00 1685.93 724.68 5545.35 00:15:21.770 [2024-12-16T19:17:06.124Z] =================================================================================================================== 00:15:21.770 [2024-12-16T19:17:06.124Z] Total : 37857.06 147.88 0.00 0.00 1685.93 724.68 5545.35 00:15:22.343 19:17:06 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:22.343 19:17:06 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:15:22.343 19:17:06 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:15:22.343 19:17:06 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:22.343 19:17:06 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:22.343 { 00:15:22.343 "subsystems": [ 00:15:22.343 { 00:15:22.343 "subsystem": "bdev", 00:15:22.343 "config": [ 00:15:22.343 { 00:15:22.343 "params": { 00:15:22.343 "io_mechanism": "io_uring_cmd", 00:15:22.343 "conserve_cpu": true, 00:15:22.343 "filename": "/dev/ng0n1", 00:15:22.343 "name": "xnvme_bdev" 00:15:22.343 }, 00:15:22.343 "method": "bdev_xnvme_create" 00:15:22.343 }, 00:15:22.343 { 00:15:22.343 "method": "bdev_wait_for_examine" 00:15:22.343 } 00:15:22.343 ] 00:15:22.343 } 00:15:22.343 ] 00:15:22.343 } 00:15:22.604 [2024-12-16 19:17:06.711664] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:15:22.604 [2024-12-16 19:17:06.711798] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73396 ] 00:15:22.604 [2024-12-16 19:17:06.875357] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:22.866 [2024-12-16 19:17:06.994324] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:23.126 Running I/O for 5 seconds... 
00:15:25.011 79040.00 IOPS, 308.75 MiB/s [2024-12-16T19:17:10.313Z] 78912.00 IOPS, 308.25 MiB/s [2024-12-16T19:17:11.295Z] 78741.33 IOPS, 307.58 MiB/s [2024-12-16T19:17:12.679Z] 78928.00 IOPS, 308.31 MiB/s [2024-12-16T19:17:12.679Z] 81305.60 IOPS, 317.60 MiB/s 00:15:28.325 Latency(us) 00:15:28.325 [2024-12-16T19:17:12.679Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:28.325 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:15:28.325 xnvme_bdev : 5.00 81265.51 317.44 0.00 0.00 784.13 395.42 3478.45 00:15:28.325 [2024-12-16T19:17:12.679Z] =================================================================================================================== 00:15:28.325 [2024-12-16T19:17:12.679Z] Total : 81265.51 317.44 0.00 0.00 784.13 395.42 3478.45 00:15:28.585 19:17:12 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:28.586 19:17:12 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:15:28.586 19:17:12 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:15:28.586 19:17:12 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:28.586 19:17:12 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:28.586 { 00:15:28.586 "subsystems": [ 00:15:28.586 { 00:15:28.586 "subsystem": "bdev", 00:15:28.586 "config": [ 00:15:28.586 { 00:15:28.586 "params": { 00:15:28.586 "io_mechanism": "io_uring_cmd", 00:15:28.586 "conserve_cpu": true, 00:15:28.586 "filename": "/dev/ng0n1", 00:15:28.586 "name": "xnvme_bdev" 00:15:28.586 }, 00:15:28.586 "method": "bdev_xnvme_create" 00:15:28.586 }, 00:15:28.586 { 00:15:28.586 "method": "bdev_wait_for_examine" 00:15:28.586 } 00:15:28.586 ] 00:15:28.586 } 00:15:28.586 ] 00:15:28.586 } 00:15:28.586 [2024-12-16 19:17:12.906729] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:15:28.586 [2024-12-16 19:17:12.906838] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73465 ] 00:15:28.847 [2024-12-16 19:17:13.068065] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:28.847 [2024-12-16 19:17:13.183256] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:29.419 Running I/O for 5 seconds... 
00:15:31.304 41997.00 IOPS, 164.05 MiB/s [2024-12-16T19:17:16.597Z] 41446.00 IOPS, 161.90 MiB/s [2024-12-16T19:17:17.539Z] 40754.00 IOPS, 159.20 MiB/s [2024-12-16T19:17:18.482Z] 39547.50 IOPS, 154.48 MiB/s [2024-12-16T19:17:18.482Z] 37384.80 IOPS, 146.03 MiB/s 00:15:34.128 Latency(us) 00:15:34.128 [2024-12-16T19:17:18.482Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:34.128 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:15:34.128 xnvme_bdev : 5.01 37345.61 145.88 0.00 0.00 1708.08 72.47 23492.14 00:15:34.128 [2024-12-16T19:17:18.482Z] =================================================================================================================== 00:15:34.128 [2024-12-16T19:17:18.482Z] Total : 37345.61 145.88 0.00 0.00 1708.08 72.47 23492.14 00:15:35.070 00:15:35.070 real 0m25.523s 00:15:35.070 user 0m16.802s 00:15:35.070 sys 0m6.619s 00:15:35.070 ************************************ 00:15:35.070 END TEST xnvme_bdevperf 00:15:35.070 ************************************ 00:15:35.070 19:17:19 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:35.070 19:17:19 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:35.070 19:17:19 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:15:35.070 19:17:19 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:35.070 19:17:19 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:35.070 19:17:19 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:35.070 ************************************ 00:15:35.070 START TEST xnvme_fio_plugin 00:15:35.070 ************************************ 00:15:35.070 19:17:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:15:35.070 19:17:19 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:15:35.070 19:17:19 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:15:35.070 19:17:19 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:35.070 19:17:19 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:35.070 19:17:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:35.070 19:17:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:35.070 19:17:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:35.070 19:17:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:35.070 19:17:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:35.070 19:17:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:15:35.070 19:17:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:35.070 19:17:19 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:35.070 19:17:19 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:15:35.070 19:17:19 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:15:35.070 19:17:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:35.070 19:17:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:15:35.070 19:17:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:35.070 19:17:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:35.070 19:17:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:35.070 19:17:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:35.070 19:17:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:15:35.070 19:17:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:35.070 19:17:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:35.070 { 00:15:35.070 "subsystems": [ 00:15:35.070 { 00:15:35.070 "subsystem": "bdev", 00:15:35.070 "config": [ 00:15:35.070 { 00:15:35.070 "params": { 00:15:35.070 "io_mechanism": "io_uring_cmd", 00:15:35.070 "conserve_cpu": true, 00:15:35.070 "filename": "/dev/ng0n1", 00:15:35.070 "name": "xnvme_bdev" 00:15:35.070 }, 00:15:35.070 "method": "bdev_xnvme_create" 00:15:35.070 }, 00:15:35.070 { 00:15:35.070 "method": "bdev_wait_for_examine" 00:15:35.070 } 00:15:35.070 ] 00:15:35.070 } 00:15:35.070 ] 00:15:35.070 } 00:15:35.330 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:15:35.330 fio-3.35 00:15:35.330 Starting 1 thread 00:15:41.919 00:15:41.919 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=73588: Mon Dec 16 19:17:25 2024 00:15:41.919 read: IOPS=36.3k, BW=142MiB/s (149MB/s)(709MiB/5001msec) 00:15:41.919 slat (nsec): min=2898, max=96750, avg=3833.99, stdev=1987.52 00:15:41.919 clat (usec): min=848, max=4091, avg=1609.91, stdev=269.24 00:15:41.919 lat (usec): min=851, max=4094, avg=1613.74, stdev=269.66 00:15:41.919 clat percentiles (usec): 00:15:41.919 | 1.00th=[ 1074], 5.00th=[ 1188], 10.00th=[ 1270], 20.00th=[ 1385], 00:15:41.919 | 30.00th=[ 1467], 40.00th=[ 1532], 50.00th=[ 1582], 60.00th=[ 1647], 00:15:41.919 | 70.00th=[ 1729], 80.00th=[ 1811], 90.00th=[ 1975], 95.00th=[ 2114], 00:15:41.919 | 99.00th=[ 2343], 99.50th=[ 2442], 99.90th=[ 2638], 99.95th=[ 2737], 00:15:41.919 | 99.99th=[ 3032] 00:15:41.919 bw ( KiB/s): min=134656, max=174243, per=100.00%, avg=146279.44, stdev=11262.44, samples=9 00:15:41.919 iops : min=33664, max=43560, avg=36569.78, stdev=2815.38, samples=9 00:15:41.919 lat (usec) : 1000=0.22% 00:15:41.919 lat (msec) : 2=91.29%, 4=8.49%, 10=0.01% 00:15:41.919 cpu : usr=50.66%, sys=46.04%, ctx=27, majf=0, minf=762 00:15:41.919 IO depths : 1=1.6%, 2=3.1%, 4=6.3%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:15:41.919 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:41.919 complete : 0=0.0%, 4=98.5%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:15:41.919 issued rwts: total=181439,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:41.919 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:41.919 00:15:41.919 Run status group 0 (all jobs): 00:15:41.919 READ: bw=142MiB/s (149MB/s), 142MiB/s-142MiB/s (149MB/s-149MB/s), io=709MiB (743MB), run=5001-5001msec 00:15:41.919 ----------------------------------------------------- 00:15:41.919 Suppressions used: 00:15:41.919 count bytes template 00:15:41.919 1 11 /usr/src/fio/parse.c 00:15:41.919 1 8 libtcmalloc_minimal.so 00:15:41.919 1 904 libcrypto.so 00:15:41.919 ----------------------------------------------------- 00:15:41.919 00:15:41.919 19:17:26 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:41.919 19:17:26 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:41.919 19:17:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:41.920 19:17:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:41.920 19:17:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:41.920 19:17:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:41.920 19:17:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:41.920 19:17:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:15:41.920 19:17:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:41.920 19:17:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:41.920 19:17:26 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:15:41.920 19:17:26 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:15:41.920 19:17:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:41.920 19:17:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:41.920 19:17:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:15:41.920 19:17:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:41.920 19:17:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:41.920 19:17:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:41.920 19:17:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:15:41.920 19:17:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:41.920 19:17:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 
--bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:42.181 { 00:15:42.181 "subsystems": [ 00:15:42.181 { 00:15:42.181 "subsystem": "bdev", 00:15:42.181 "config": [ 00:15:42.181 { 00:15:42.181 "params": { 00:15:42.181 "io_mechanism": "io_uring_cmd", 00:15:42.181 "conserve_cpu": true, 00:15:42.181 "filename": "/dev/ng0n1", 00:15:42.181 "name": "xnvme_bdev" 00:15:42.181 }, 00:15:42.181 "method": "bdev_xnvme_create" 00:15:42.181 }, 00:15:42.181 { 00:15:42.181 "method": "bdev_wait_for_examine" 00:15:42.181 } 00:15:42.181 ] 00:15:42.181 } 00:15:42.181 ] 00:15:42.181 } 00:15:42.181 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:15:42.181 fio-3.35 00:15:42.181 Starting 1 thread 00:15:48.771 00:15:48.771 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=73676: Mon Dec 16 19:17:32 2024 00:15:48.771 write: IOPS=37.3k, BW=146MiB/s (153MB/s)(729MiB/5002msec); 0 zone resets 00:15:48.771 slat (usec): min=2, max=636, avg= 4.17, stdev= 3.28 00:15:48.771 clat (usec): min=414, max=4794, avg=1546.57, stdev=296.68 00:15:48.771 lat (usec): min=418, max=4798, avg=1550.74, stdev=297.44 00:15:48.771 clat percentiles (usec): 00:15:48.771 | 1.00th=[ 1037], 5.00th=[ 1123], 10.00th=[ 1188], 20.00th=[ 1303], 00:15:48.771 | 30.00th=[ 1385], 40.00th=[ 1450], 50.00th=[ 1516], 60.00th=[ 1582], 00:15:48.771 | 70.00th=[ 1663], 80.00th=[ 1762], 90.00th=[ 1926], 95.00th=[ 2057], 00:15:48.771 | 99.00th=[ 2409], 99.50th=[ 2573], 99.90th=[ 3195], 99.95th=[ 3523], 00:15:48.771 | 99.99th=[ 3916] 00:15:48.771 bw ( KiB/s): min=139184, max=161552, per=99.62%, avg=148750.22, stdev=9475.85, samples=9 00:15:48.771 iops : min=34796, max=40388, avg=37187.56, stdev=2368.96, samples=9 00:15:48.771 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.30% 00:15:48.771 lat (msec) : 2=92.86%, 4=6.82%, 10=0.01% 00:15:48.771 cpu : usr=54.03%, sys=41.35%, ctx=22, majf=0, minf=763 00:15:48.771 IO depths : 1=1.5%, 2=3.0%, 4=6.1%, 8=12.5%, 16=25.0%, 32=50.2%, >=64=1.6% 00:15:48.771 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:48.771 complete : 0=0.0%, 4=98.5%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:15:48.771 issued rwts: total=0,186721,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:48.771 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:48.771 00:15:48.771 Run status group 0 (all jobs): 00:15:48.771 WRITE: bw=146MiB/s (153MB/s), 146MiB/s-146MiB/s (153MB/s-153MB/s), io=729MiB (765MB), run=5002-5002msec 00:15:48.771 ----------------------------------------------------- 00:15:48.771 Suppressions used: 00:15:48.771 count bytes template 00:15:48.771 1 11 /usr/src/fio/parse.c 00:15:48.771 1 8 libtcmalloc_minimal.so 00:15:48.771 1 904 libcrypto.so 00:15:48.771 ----------------------------------------------------- 00:15:48.771 00:15:48.771 00:15:48.771 real 0m13.782s 00:15:48.771 user 0m8.092s 00:15:48.771 sys 0m4.976s 00:15:48.771 19:17:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:48.771 ************************************ 00:15:48.771 END TEST xnvme_fio_plugin 00:15:48.771 ************************************ 00:15:48.771 19:17:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:49.032 19:17:33 nvme_xnvme -- xnvme/xnvme.sh@1 -- # killprocess 73178 00:15:49.032 19:17:33 nvme_xnvme -- common/autotest_common.sh@954 -- # '[' -z 73178 ']' 00:15:49.032 19:17:33 nvme_xnvme -- common/autotest_common.sh@958 -- # kill -0 73178 
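The xnvme_fio_plugin test above reuses the same JSON config but drives it through fio's external spdk_bdev ioengine; the harness resolves the sanitizer runtime with ldd/grep/awk and preloads it ahead of the plugin. A minimal sketch of the resulting invocation, assuming the same fio checkout, SPDK tree, and /tmp/xnvme.json config as the earlier sketch:

# sketch only -- mirrors the LD_PRELOAD + fio command assembled by the harness above
PLUGIN=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
LD_PRELOAD="/usr/lib64/libasan.so.8 $PLUGIN" /usr/src/fio/fio \
    --ioengine=spdk_bdev --spdk_json_conf=/tmp/xnvme.json --filename=xnvme_bdev \
    --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread \
    --time_based --runtime=5 --thread=1 --name=xnvme_bdev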
00:15:49.032 Process with pid 73178 is not found 00:15:49.032 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (73178) - No such process 00:15:49.032 19:17:33 nvme_xnvme -- common/autotest_common.sh@981 -- # echo 'Process with pid 73178 is not found' 00:15:49.032 ************************************ 00:15:49.032 END TEST nvme_xnvme 00:15:49.032 ************************************ 00:15:49.032 00:15:49.032 real 3m32.724s 00:15:49.032 user 1m59.833s 00:15:49.032 sys 1m18.538s 00:15:49.032 19:17:33 nvme_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:49.032 19:17:33 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:49.032 19:17:33 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:15:49.032 19:17:33 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:49.032 19:17:33 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:49.032 19:17:33 -- common/autotest_common.sh@10 -- # set +x 00:15:49.032 ************************************ 00:15:49.032 START TEST blockdev_xnvme 00:15:49.032 ************************************ 00:15:49.032 19:17:33 blockdev_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:15:49.032 * Looking for test storage... 00:15:49.032 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:15:49.032 19:17:33 blockdev_xnvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:49.032 19:17:33 blockdev_xnvme -- common/autotest_common.sh@1711 -- # lcov --version 00:15:49.032 19:17:33 blockdev_xnvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:49.032 19:17:33 blockdev_xnvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:49.032 19:17:33 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:49.032 19:17:33 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:49.032 19:17:33 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:49.032 19:17:33 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:15:49.032 19:17:33 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:15:49.032 19:17:33 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:15:49.032 19:17:33 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:15:49.032 19:17:33 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:15:49.032 19:17:33 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:15:49.032 19:17:33 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:15:49.032 19:17:33 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:49.032 19:17:33 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:15:49.032 19:17:33 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:15:49.032 19:17:33 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:49.032 19:17:33 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:49.032 19:17:33 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:15:49.032 19:17:33 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:15:49.032 19:17:33 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:49.032 19:17:33 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:15:49.032 19:17:33 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:15:49.032 19:17:33 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:15:49.032 19:17:33 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:15:49.032 19:17:33 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:49.032 19:17:33 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:15:49.294 19:17:33 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:15:49.294 19:17:33 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:49.294 19:17:33 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:49.294 19:17:33 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:15:49.294 19:17:33 blockdev_xnvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:49.294 19:17:33 blockdev_xnvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:49.294 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:49.294 --rc genhtml_branch_coverage=1 00:15:49.294 --rc genhtml_function_coverage=1 00:15:49.294 --rc genhtml_legend=1 00:15:49.294 --rc geninfo_all_blocks=1 00:15:49.294 --rc geninfo_unexecuted_blocks=1 00:15:49.294 00:15:49.294 ' 00:15:49.294 19:17:33 blockdev_xnvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:49.294 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:49.294 --rc genhtml_branch_coverage=1 00:15:49.294 --rc genhtml_function_coverage=1 00:15:49.294 --rc genhtml_legend=1 00:15:49.294 --rc geninfo_all_blocks=1 00:15:49.294 --rc geninfo_unexecuted_blocks=1 00:15:49.294 00:15:49.294 ' 00:15:49.294 19:17:33 blockdev_xnvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:49.294 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:49.294 --rc genhtml_branch_coverage=1 00:15:49.294 --rc genhtml_function_coverage=1 00:15:49.294 --rc genhtml_legend=1 00:15:49.294 --rc geninfo_all_blocks=1 00:15:49.294 --rc geninfo_unexecuted_blocks=1 00:15:49.294 00:15:49.294 ' 00:15:49.294 19:17:33 blockdev_xnvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:49.294 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:49.294 --rc genhtml_branch_coverage=1 00:15:49.294 --rc genhtml_function_coverage=1 00:15:49.294 --rc genhtml_legend=1 00:15:49.294 --rc geninfo_all_blocks=1 00:15:49.294 --rc geninfo_unexecuted_blocks=1 00:15:49.294 00:15:49.294 ' 00:15:49.294 19:17:33 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:15:49.294 19:17:33 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:15:49.294 19:17:33 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:15:49.294 19:17:33 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:15:49.294 19:17:33 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:15:49.294 19:17:33 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:15:49.294 19:17:33 blockdev_xnvme -- bdev/blockdev.sh@17 -- 
# export RPC_PIPE_TIMEOUT=30 00:15:49.294 19:17:33 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:15:49.294 19:17:33 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:15:49.294 19:17:33 blockdev_xnvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:15:49.294 19:17:33 blockdev_xnvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:15:49.294 19:17:33 blockdev_xnvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:15:49.294 19:17:33 blockdev_xnvme -- bdev/blockdev.sh@711 -- # uname -s 00:15:49.294 19:17:33 blockdev_xnvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:15:49.294 19:17:33 blockdev_xnvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:15:49.294 19:17:33 blockdev_xnvme -- bdev/blockdev.sh@719 -- # test_type=xnvme 00:15:49.294 19:17:33 blockdev_xnvme -- bdev/blockdev.sh@720 -- # crypto_device= 00:15:49.294 19:17:33 blockdev_xnvme -- bdev/blockdev.sh@721 -- # dek= 00:15:49.294 19:17:33 blockdev_xnvme -- bdev/blockdev.sh@722 -- # env_ctx= 00:15:49.294 19:17:33 blockdev_xnvme -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:15:49.294 19:17:33 blockdev_xnvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:15:49.294 19:17:33 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == bdev ]] 00:15:49.294 19:17:33 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == crypto_* ]] 00:15:49.294 19:17:33 blockdev_xnvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:15:49.294 19:17:33 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=73810 00:15:49.294 19:17:33 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:15:49.294 19:17:33 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 73810 00:15:49.294 19:17:33 blockdev_xnvme -- common/autotest_common.sh@835 -- # '[' -z 73810 ']' 00:15:49.294 19:17:33 blockdev_xnvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:49.294 19:17:33 blockdev_xnvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:49.294 19:17:33 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:15:49.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:49.294 19:17:33 blockdev_xnvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:49.294 19:17:33 blockdev_xnvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:49.294 19:17:33 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:49.294 [2024-12-16 19:17:33.489703] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:15:49.294 [2024-12-16 19:17:33.490037] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73810 ] 00:15:49.556 [2024-12-16 19:17:33.656423] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:49.556 [2024-12-16 19:17:33.774302] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:50.128 19:17:34 blockdev_xnvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:50.128 19:17:34 blockdev_xnvme -- common/autotest_common.sh@868 -- # return 0 00:15:50.128 19:17:34 blockdev_xnvme -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:15:50.128 19:17:34 blockdev_xnvme -- bdev/blockdev.sh@766 -- # setup_xnvme_conf 00:15:50.128 19:17:34 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:15:50.128 19:17:34 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:15:50.128 19:17:34 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:15:50.699 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:51.276 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:15:51.276 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:15:51.276 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:15:51.276 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:15:51.276 19:17:35 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:15:51.276 19:17:35 blockdev_xnvme -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:15:51.276 19:17:35 blockdev_xnvme -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:15:51.276 19:17:35 blockdev_xnvme -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:15:51.276 19:17:35 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:15:51.276 19:17:35 blockdev_xnvme -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:15:51.276 19:17:35 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:15:51.276 19:17:35 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:12.0 00:15:51.276 19:17:35 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:15:51.276 19:17:35 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:15:51.276 19:17:35 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:15:51.276 19:17:35 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:15:51.276 19:17:35 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:51.276 19:17:35 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:15:51.276 19:17:35 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n2 00:15:51.276 19:17:35 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:15:51.276 19:17:35 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:15:51.276 19:17:35 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:51.276 19:17:35 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:15:51.276 19:17:35 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n3 00:15:51.276 19:17:35 
blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:15:51.276 19:17:35 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:15:51.276 19:17:35 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:51.276 19:17:35 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:15:51.276 19:17:35 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:13.0 00:15:51.276 19:17:35 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:15:51.276 19:17:35 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1c1n1 00:15:51.276 19:17:35 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1c1n1 00:15:51.276 19:17:35 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1c1n1/queue/zoned ]] 00:15:51.276 19:17:35 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:51.276 19:17:35 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:15:51.276 19:17:35 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:15:51.276 19:17:35 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:15:51.276 19:17:35 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n1 00:15:51.276 19:17:35 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:15:51.276 19:17:35 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:15:51.276 19:17:35 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:51.276 19:17:35 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:15:51.276 19:17:35 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:15:51.276 19:17:35 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:15:51.276 19:17:35 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3n1 00:15:51.276 19:17:35 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:15:51.276 19:17:35 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:15:51.276 19:17:35 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:51.276 19:17:35 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:51.276 19:17:35 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:15:51.276 19:17:35 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:51.276 19:17:35 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:51.276 19:17:35 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:51.276 19:17:35 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n2 ]] 00:15:51.276 19:17:35 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:51.276 19:17:35 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:51.276 19:17:35 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:51.276 19:17:35 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n3 ]] 00:15:51.276 19:17:35 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:51.276 19:17:35 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme 
${nvme##*/} $io_mechanism -c") 00:15:51.276 19:17:35 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:51.276 19:17:35 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:15:51.276 19:17:35 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:51.276 19:17:35 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:51.276 19:17:35 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:51.276 19:17:35 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:15:51.276 19:17:35 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:51.276 19:17:35 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:51.276 19:17:35 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:51.276 19:17:35 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:15:51.276 19:17:35 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:51.276 19:17:35 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:51.276 19:17:35 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:15:51.276 19:17:35 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:15:51.276 19:17:35 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.276 19:17:35 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring -c' 'bdev_xnvme_create /dev/nvme0n2 nvme0n2 io_uring -c' 'bdev_xnvme_create /dev/nvme0n3 nvme0n3 io_uring -c' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring -c' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring -c' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring -c' 00:15:51.276 19:17:35 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:51.276 nvme0n1 00:15:51.276 nvme0n2 00:15:51.276 nvme0n3 00:15:51.276 nvme1n1 00:15:51.276 nvme2n1 00:15:51.276 nvme3n1 00:15:51.276 19:17:35 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.276 19:17:35 blockdev_xnvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:15:51.276 19:17:35 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.276 19:17:35 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:51.538 19:17:35 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.538 19:17:35 blockdev_xnvme -- bdev/blockdev.sh@777 -- # cat 00:15:51.538 19:17:35 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:15:51.538 19:17:35 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.538 19:17:35 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:51.538 19:17:35 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.538 19:17:35 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:15:51.538 19:17:35 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.538 19:17:35 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:51.538 19:17:35 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.538 19:17:35 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:15:51.538 19:17:35 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.538 19:17:35 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:51.538 
19:17:35 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.538 19:17:35 blockdev_xnvme -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:15:51.538 19:17:35 blockdev_xnvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:15:51.538 19:17:35 blockdev_xnvme -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:15:51.538 19:17:35 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.538 19:17:35 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:51.538 19:17:35 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.538 19:17:35 blockdev_xnvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:15:51.538 19:17:35 blockdev_xnvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "6130f36d-8392-4a2b-856c-eea1a06bd9ae"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "6130f36d-8392-4a2b-856c-eea1a06bd9ae",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "f54b085b-9f53-4160-b015-61befe715143"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "f54b085b-9f53-4160-b015-61befe715143",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "24bc4445-df25-4b0f-a60a-981c5a9a388c"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "24bc4445-df25-4b0f-a60a-981c5a9a388c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' 
"8addc0d0-015d-4467-b8f2-8fca403885d2"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "8addc0d0-015d-4467-b8f2-8fca403885d2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "61c3f9b4-a7cf-4830-8d99-4bd23bf53ab9"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "61c3f9b4-a7cf-4830-8d99-4bd23bf53ab9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "bac8260f-df62-4ff9-b9c0-cf4b191a5915"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "bac8260f-df62-4ff9-b9c0-cf4b191a5915",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:15:51.538 19:17:35 blockdev_xnvme -- bdev/blockdev.sh@786 -- # jq -r .name 00:15:51.538 19:17:35 blockdev_xnvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:15:51.538 19:17:35 blockdev_xnvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=nvme0n1 00:15:51.539 19:17:35 blockdev_xnvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:15:51.539 19:17:35 blockdev_xnvme -- bdev/blockdev.sh@791 -- # killprocess 73810 00:15:51.539 19:17:35 blockdev_xnvme -- common/autotest_common.sh@954 -- # '[' -z 73810 ']' 00:15:51.539 19:17:35 blockdev_xnvme -- common/autotest_common.sh@958 -- # kill -0 73810 00:15:51.539 19:17:35 blockdev_xnvme -- common/autotest_common.sh@959 -- # uname 00:15:51.539 19:17:35 blockdev_xnvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:51.539 19:17:35 blockdev_xnvme -- common/autotest_common.sh@960 -- # 
ps --no-headers -o comm= 73810 00:15:51.539 killing process with pid 73810 00:15:51.539 19:17:35 blockdev_xnvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:51.539 19:17:35 blockdev_xnvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:51.539 19:17:35 blockdev_xnvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73810' 00:15:51.539 19:17:35 blockdev_xnvme -- common/autotest_common.sh@973 -- # kill 73810 00:15:51.539 19:17:35 blockdev_xnvme -- common/autotest_common.sh@978 -- # wait 73810 00:15:53.453 19:17:37 blockdev_xnvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:15:53.453 19:17:37 blockdev_xnvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:15:53.453 19:17:37 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:15:53.453 19:17:37 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:53.453 19:17:37 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:53.453 ************************************ 00:15:53.453 START TEST bdev_hello_world 00:15:53.453 ************************************ 00:15:53.453 19:17:37 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:15:53.453 [2024-12-16 19:17:37.522470] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:15:53.453 [2024-12-16 19:17:37.522615] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74094 ] 00:15:53.453 [2024-12-16 19:17:37.684875] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:53.453 [2024-12-16 19:17:37.803762] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:54.024 [2024-12-16 19:17:38.205508] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:15:54.024 [2024-12-16 19:17:38.205817] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:15:54.024 [2024-12-16 19:17:38.205845] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:15:54.024 [2024-12-16 19:17:38.207995] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:15:54.024 [2024-12-16 19:17:38.209130] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:15:54.024 [2024-12-16 19:17:38.209341] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:15:54.024 [2024-12-16 19:17:38.209796] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
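The bdev.json handed to hello_bdev here (and to the bdevio and nbd tests below) comes from the xnvme bdev setup shown earlier in this log: one bdev_xnvme_create per /dev/nvme*n* node plus a bdev_wait_for_examine, captured via the save_subsystem_config calls. Against a live spdk_tgt the equivalent would be plain RPC calls; a sketch assuming scripts/rpc.py from the same tree, with arguments copied from the printf'd RPC list above:

# sketch only -- the same xnvme bdev creation done over RPC instead of a --json config file
./scripts/rpc.py bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring -c
./scripts/rpc.py bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring -c
./scripts/rpc.py bdev_wait_for_examine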
00:15:54.024 00:15:54.024 [2024-12-16 19:17:38.209829] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:15:55.031 00:15:55.031 real 0m1.542s 00:15:55.031 user 0m1.180s 00:15:55.031 ************************************ 00:15:55.031 END TEST bdev_hello_world 00:15:55.031 ************************************ 00:15:55.031 sys 0m0.214s 00:15:55.031 19:17:38 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:55.031 19:17:38 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:15:55.031 19:17:39 blockdev_xnvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:15:55.031 19:17:39 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:55.031 19:17:39 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:55.031 19:17:39 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:55.031 ************************************ 00:15:55.031 START TEST bdev_bounds 00:15:55.031 ************************************ 00:15:55.031 19:17:39 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:15:55.031 19:17:39 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=74131 00:15:55.031 Process bdevio pid: 74131 00:15:55.031 19:17:39 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:15:55.031 19:17:39 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 74131' 00:15:55.031 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:55.031 19:17:39 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 74131 00:15:55.031 19:17:39 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 74131 ']' 00:15:55.031 19:17:39 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:55.031 19:17:39 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:55.031 19:17:39 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:55.031 19:17:39 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:55.031 19:17:39 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:15:55.031 19:17:39 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:15:55.031 [2024-12-16 19:17:39.142093] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:15:55.031 [2024-12-16 19:17:39.142505] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74131 ] 00:15:55.031 [2024-12-16 19:17:39.300325] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:55.292 [2024-12-16 19:17:39.418574] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:15:55.292 [2024-12-16 19:17:39.418906] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:15:55.292 [2024-12-16 19:17:39.418910] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:55.865 19:17:39 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:55.865 19:17:40 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:15:55.865 19:17:40 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:15:55.865 I/O targets: 00:15:55.865 nvme0n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:15:55.865 nvme0n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:15:55.865 nvme0n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:15:55.865 nvme1n1: 262144 blocks of 4096 bytes (1024 MiB) 00:15:55.865 nvme2n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:15:55.865 nvme3n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:15:55.865 00:15:55.865 00:15:55.865 CUnit - A unit testing framework for C - Version 2.1-3 00:15:55.865 http://cunit.sourceforge.net/ 00:15:55.865 00:15:55.865 00:15:55.865 Suite: bdevio tests on: nvme3n1 00:15:55.865 Test: blockdev write read block ...passed 00:15:55.865 Test: blockdev write zeroes read block ...passed 00:15:55.865 Test: blockdev write zeroes read no split ...passed 00:15:55.865 Test: blockdev write zeroes read split ...passed 00:15:55.865 Test: blockdev write zeroes read split partial ...passed 00:15:55.865 Test: blockdev reset ...passed 00:15:55.865 Test: blockdev write read 8 blocks ...passed 00:15:55.865 Test: blockdev write read size > 128k ...passed 00:15:55.865 Test: blockdev write read invalid size ...passed 00:15:55.865 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:55.865 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:55.865 Test: blockdev write read max offset ...passed 00:15:55.865 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:55.865 Test: blockdev writev readv 8 blocks ...passed 00:15:55.865 Test: blockdev writev readv 30 x 1block ...passed 00:15:55.865 Test: blockdev writev readv block ...passed 00:15:55.865 Test: blockdev writev readv size > 128k ...passed 00:15:55.865 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:55.865 Test: blockdev comparev and writev ...passed 00:15:55.865 Test: blockdev nvme passthru rw ...passed 00:15:55.865 Test: blockdev nvme passthru vendor specific ...passed 00:15:55.865 Test: blockdev nvme admin passthru ...passed 00:15:55.865 Test: blockdev copy ...passed 00:15:55.865 Suite: bdevio tests on: nvme2n1 00:15:55.865 Test: blockdev write read block ...passed 00:15:55.865 Test: blockdev write zeroes read block ...passed 00:15:55.865 Test: blockdev write zeroes read no split ...passed 00:15:56.126 Test: blockdev write zeroes read split ...passed 00:15:56.126 Test: blockdev write zeroes read split partial ...passed 00:15:56.126 Test: blockdev reset ...passed 
00:15:56.126 Test: blockdev write read 8 blocks ...passed 00:15:56.127 Test: blockdev write read size > 128k ...passed 00:15:56.127 Test: blockdev write read invalid size ...passed 00:15:56.127 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:56.127 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:56.127 Test: blockdev write read max offset ...passed 00:15:56.127 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:56.127 Test: blockdev writev readv 8 blocks ...passed 00:15:56.127 Test: blockdev writev readv 30 x 1block ...passed 00:15:56.127 Test: blockdev writev readv block ...passed 00:15:56.127 Test: blockdev writev readv size > 128k ...passed 00:15:56.127 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:56.127 Test: blockdev comparev and writev ...passed 00:15:56.127 Test: blockdev nvme passthru rw ...passed 00:15:56.127 Test: blockdev nvme passthru vendor specific ...passed 00:15:56.127 Test: blockdev nvme admin passthru ...passed 00:15:56.127 Test: blockdev copy ...passed 00:15:56.127 Suite: bdevio tests on: nvme1n1 00:15:56.127 Test: blockdev write read block ...passed 00:15:56.127 Test: blockdev write zeroes read block ...passed 00:15:56.127 Test: blockdev write zeroes read no split ...passed 00:15:56.127 Test: blockdev write zeroes read split ...passed 00:15:56.127 Test: blockdev write zeroes read split partial ...passed 00:15:56.127 Test: blockdev reset ...passed 00:15:56.127 Test: blockdev write read 8 blocks ...passed 00:15:56.127 Test: blockdev write read size > 128k ...passed 00:15:56.127 Test: blockdev write read invalid size ...passed 00:15:56.127 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:56.127 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:56.127 Test: blockdev write read max offset ...passed 00:15:56.127 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:56.127 Test: blockdev writev readv 8 blocks ...passed 00:15:56.127 Test: blockdev writev readv 30 x 1block ...passed 00:15:56.127 Test: blockdev writev readv block ...passed 00:15:56.127 Test: blockdev writev readv size > 128k ...passed 00:15:56.127 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:56.127 Test: blockdev comparev and writev ...passed 00:15:56.127 Test: blockdev nvme passthru rw ...passed 00:15:56.127 Test: blockdev nvme passthru vendor specific ...passed 00:15:56.127 Test: blockdev nvme admin passthru ...passed 00:15:56.127 Test: blockdev copy ...passed 00:15:56.127 Suite: bdevio tests on: nvme0n3 00:15:56.127 Test: blockdev write read block ...passed 00:15:56.127 Test: blockdev write zeroes read block ...passed 00:15:56.127 Test: blockdev write zeroes read no split ...passed 00:15:56.127 Test: blockdev write zeroes read split ...passed 00:15:56.127 Test: blockdev write zeroes read split partial ...passed 00:15:56.127 Test: blockdev reset ...passed 00:15:56.127 Test: blockdev write read 8 blocks ...passed 00:15:56.127 Test: blockdev write read size > 128k ...passed 00:15:56.127 Test: blockdev write read invalid size ...passed 00:15:56.127 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:56.127 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:56.127 Test: blockdev write read max offset ...passed 00:15:56.127 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:56.127 Test: blockdev writev readv 8 blocks 
...passed 00:15:56.127 Test: blockdev writev readv 30 x 1block ...passed 00:15:56.127 Test: blockdev writev readv block ...passed 00:15:56.127 Test: blockdev writev readv size > 128k ...passed 00:15:56.127 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:56.127 Test: blockdev comparev and writev ...passed 00:15:56.127 Test: blockdev nvme passthru rw ...passed 00:15:56.127 Test: blockdev nvme passthru vendor specific ...passed 00:15:56.127 Test: blockdev nvme admin passthru ...passed 00:15:56.127 Test: blockdev copy ...passed 00:15:56.127 Suite: bdevio tests on: nvme0n2 00:15:56.127 Test: blockdev write read block ...passed 00:15:56.127 Test: blockdev write zeroes read block ...passed 00:15:56.127 Test: blockdev write zeroes read no split ...passed 00:15:56.388 Test: blockdev write zeroes read split ...passed 00:15:56.389 Test: blockdev write zeroes read split partial ...passed 00:15:56.389 Test: blockdev reset ...passed 00:15:56.389 Test: blockdev write read 8 blocks ...passed 00:15:56.389 Test: blockdev write read size > 128k ...passed 00:15:56.389 Test: blockdev write read invalid size ...passed 00:15:56.389 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:56.389 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:56.389 Test: blockdev write read max offset ...passed 00:15:56.389 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:56.389 Test: blockdev writev readv 8 blocks ...passed 00:15:56.389 Test: blockdev writev readv 30 x 1block ...passed 00:15:56.389 Test: blockdev writev readv block ...passed 00:15:56.389 Test: blockdev writev readv size > 128k ...passed 00:15:56.389 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:56.389 Test: blockdev comparev and writev ...passed 00:15:56.389 Test: blockdev nvme passthru rw ...passed 00:15:56.389 Test: blockdev nvme passthru vendor specific ...passed 00:15:56.389 Test: blockdev nvme admin passthru ...passed 00:15:56.389 Test: blockdev copy ...passed 00:15:56.389 Suite: bdevio tests on: nvme0n1 00:15:56.389 Test: blockdev write read block ...passed 00:15:56.389 Test: blockdev write zeroes read block ...passed 00:15:56.389 Test: blockdev write zeroes read no split ...passed 00:15:56.389 Test: blockdev write zeroes read split ...passed 00:15:56.389 Test: blockdev write zeroes read split partial ...passed 00:15:56.389 Test: blockdev reset ...passed 00:15:56.389 Test: blockdev write read 8 blocks ...passed 00:15:56.389 Test: blockdev write read size > 128k ...passed 00:15:56.389 Test: blockdev write read invalid size ...passed 00:15:56.389 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:56.389 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:56.389 Test: blockdev write read max offset ...passed 00:15:56.389 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:56.389 Test: blockdev writev readv 8 blocks ...passed 00:15:56.389 Test: blockdev writev readv 30 x 1block ...passed 00:15:56.389 Test: blockdev writev readv block ...passed 00:15:56.389 Test: blockdev writev readv size > 128k ...passed 00:15:56.389 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:56.389 Test: blockdev comparev and writev ...passed 00:15:56.389 Test: blockdev nvme passthru rw ...passed 00:15:56.389 Test: blockdev nvme passthru vendor specific ...passed 00:15:56.389 Test: blockdev nvme admin passthru ...passed 00:15:56.389 Test: blockdev copy ...passed 
00:15:56.389 00:15:56.389 Run Summary: Type Total Ran Passed Failed Inactive 00:15:56.389 suites 6 6 n/a 0 0 00:15:56.389 tests 138 138 138 0 0 00:15:56.389 asserts 780 780 780 0 n/a 00:15:56.389 00:15:56.389 Elapsed time = 1.297 seconds 00:15:56.389 0 00:15:56.389 19:17:40 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 74131 00:15:56.389 19:17:40 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 74131 ']' 00:15:56.389 19:17:40 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 74131 00:15:56.389 19:17:40 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:15:56.389 19:17:40 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:56.389 19:17:40 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74131 00:15:56.389 19:17:40 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:56.389 19:17:40 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:56.389 19:17:40 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74131' 00:15:56.389 killing process with pid 74131 00:15:56.389 19:17:40 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 74131 00:15:56.389 19:17:40 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 74131 00:15:57.332 19:17:41 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:15:57.332 00:15:57.332 real 0m2.402s 00:15:57.332 user 0m5.818s 00:15:57.332 sys 0m0.394s 00:15:57.332 19:17:41 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:57.332 19:17:41 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:15:57.332 ************************************ 00:15:57.332 END TEST bdev_bounds 00:15:57.332 ************************************ 00:15:57.333 19:17:41 blockdev_xnvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:15:57.333 19:17:41 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:57.333 19:17:41 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:57.333 19:17:41 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:57.333 ************************************ 00:15:57.333 START TEST bdev_nbd 00:15:57.333 ************************************ 00:15:57.333 19:17:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:15:57.333 19:17:41 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:15:57.333 19:17:41 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:15:57.333 19:17:41 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:57.333 19:17:41 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:15:57.333 19:17:41 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:57.333 19:17:41 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:15:57.333 19:17:41 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 
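The bdev_nbd test being set up here drives the same six bdevs through the kernel's network-block-device layer: each bdev gets attached to a /dev/nbdX node so ordinary block tools (dd, cmp, mkfs) can exercise the SPDK stack. The whole test presupposes the nbd kernel module, which is exactly what the [[ -e /sys/module/nbd ]] guard on the next step checks. A minimal sketch of that precondition outside the harness:

    #!/usr/bin/env bash
    # Illustrative precondition check, mirroring the guard in blockdev.sh;
    # not the harness itself.
    if [[ ! -e /sys/module/nbd ]]; then
        sudo modprobe nbd    # assumes nbd is built as a module for this kernel
    fi
    ls /dev/nbd*             # the driver typically exposes /dev/nbd0../dev/nbd15
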
00:15:57.333 19:17:41 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:15:57.333 19:17:41 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:15:57.333 19:17:41 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:15:57.333 19:17:41 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:15:57.333 19:17:41 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:57.333 19:17:41 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:15:57.333 19:17:41 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:57.333 19:17:41 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:15:57.333 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:15:57.333 19:17:41 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=74185 00:15:57.333 19:17:41 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:15:57.333 19:17:41 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 74185 /var/tmp/spdk-nbd.sock 00:15:57.333 19:17:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 74185 ']' 00:15:57.333 19:17:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:15:57.333 19:17:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:57.333 19:17:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:15:57.333 19:17:41 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:15:57.333 19:17:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:57.333 19:17:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:15:57.333 [2024-12-16 19:17:41.632560] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
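What follows is the core loop of nbd_function_test. bdev_svc has just been launched with -r /var/tmp/spdk-nbd.sock, so every later step is an RPC against that socket: for each bdev, nbd_start_disk attaches it to a kernel NBD node (the first pass lets the app pick a free one), waitfornbd polls /proc/partitions until the node appears, and a single 4 KiB O_DIRECT read proves the device answers I/O. Condensed into a standalone sketch, with paths as they appear in this run and the retry loop simplified from waitfornbd's bounded 20-attempt form:

    #!/usr/bin/env bash
    # Condensed from the xtrace below; a sketch, not the harness itself.
    sock=/var/tmp/spdk-nbd.sock
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    dev=$("$rpc" -s "$sock" nbd_start_disk nvme0n1)    # prints the node, e.g. /dev/nbd0
    until grep -q -w "$(basename "$dev")" /proc/partitions; do
        sleep 0.1                                      # wait for the kernel to see it
    done
    dd if="$dev" of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest \
       bs=4096 count=1 iflag=direct                    # one-block readability probe
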
00:15:57.333 [2024-12-16 19:17:41.632710] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:57.594 [2024-12-16 19:17:41.799708] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:57.594 [2024-12-16 19:17:41.927306] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:58.166 19:17:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:58.166 19:17:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:15:58.166 19:17:42 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:15:58.166 19:17:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:58.166 19:17:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:58.166 19:17:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:15:58.166 19:17:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:15:58.166 19:17:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:58.166 19:17:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:58.166 19:17:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:15:58.166 19:17:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:15:58.166 19:17:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:15:58.166 19:17:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:15:58.166 19:17:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:58.166 19:17:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:15:58.428 19:17:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:15:58.428 19:17:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:15:58.428 19:17:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:15:58.428 19:17:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:58.428 19:17:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:58.428 19:17:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:58.428 19:17:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:58.428 19:17:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:58.428 19:17:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:58.428 19:17:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:58.428 19:17:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:58.428 19:17:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:58.428 
1+0 records in 00:15:58.428 1+0 records out 00:15:58.428 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00129802 s, 3.2 MB/s 00:15:58.428 19:17:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:58.428 19:17:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:58.428 19:17:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:58.428 19:17:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:58.428 19:17:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:58.428 19:17:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:58.428 19:17:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:58.428 19:17:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 00:15:58.689 19:17:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:15:58.689 19:17:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:15:58.689 19:17:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:15:58.689 19:17:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:58.689 19:17:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:58.689 19:17:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:58.689 19:17:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:58.689 19:17:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:58.689 19:17:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:58.689 19:17:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:58.689 19:17:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:58.689 19:17:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:58.689 1+0 records in 00:15:58.689 1+0 records out 00:15:58.689 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000857886 s, 4.8 MB/s 00:15:58.689 19:17:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:58.689 19:17:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:58.689 19:17:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:58.689 19:17:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:58.689 19:17:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:58.689 19:17:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:58.689 19:17:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:58.689 19:17:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 00:15:58.950 19:17:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:15:58.950 19:17:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:15:58.950 19:17:43 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:15:58.950 19:17:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:15:58.950 19:17:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:58.950 19:17:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:58.950 19:17:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:58.950 19:17:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:15:58.950 19:17:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:58.950 19:17:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:58.950 19:17:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:58.950 19:17:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:58.950 1+0 records in 00:15:58.950 1+0 records out 00:15:58.950 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000893859 s, 4.6 MB/s 00:15:58.950 19:17:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:58.950 19:17:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:58.950 19:17:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:58.950 19:17:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:58.950 19:17:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:58.950 19:17:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:58.950 19:17:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:58.950 19:17:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:15:59.211 19:17:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:15:59.211 19:17:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:15:59.211 19:17:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:15:59.211 19:17:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:15:59.211 19:17:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:59.211 19:17:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:59.211 19:17:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:59.211 19:17:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:15:59.211 19:17:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:59.211 19:17:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:59.211 19:17:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:59.211 19:17:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:59.211 1+0 records in 00:15:59.211 1+0 records out 00:15:59.211 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00118924 s, 3.4 MB/s 00:15:59.211 19:17:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:59.211 19:17:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:59.211 19:17:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:59.211 19:17:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:59.211 19:17:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:59.211 19:17:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:59.211 19:17:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:59.211 19:17:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:15:59.472 19:17:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:15:59.472 19:17:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:15:59.472 19:17:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:15:59.472 19:17:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:15:59.472 19:17:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:59.472 19:17:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:59.472 19:17:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:59.472 19:17:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:15:59.472 19:17:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:59.472 19:17:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:59.472 19:17:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:59.472 19:17:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:59.472 1+0 records in 00:15:59.472 1+0 records out 00:15:59.472 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00133914 s, 3.1 MB/s 00:15:59.472 19:17:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:59.472 19:17:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:59.472 19:17:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:59.472 19:17:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:59.472 19:17:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:59.472 19:17:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:59.472 19:17:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:59.472 19:17:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:15:59.734 19:17:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:15:59.734 19:17:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:15:59.734 19:17:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:15:59.734 19:17:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:15:59.734 19:17:44 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:59.734 19:17:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:59.734 19:17:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:59.734 19:17:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:15:59.734 19:17:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:59.734 19:17:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:59.734 19:17:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:59.734 19:17:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:59.734 1+0 records in 00:15:59.734 1+0 records out 00:15:59.734 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00100308 s, 4.1 MB/s 00:15:59.734 19:17:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:59.734 19:17:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:59.734 19:17:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:59.734 19:17:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:59.734 19:17:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:59.734 19:17:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:59.734 19:17:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:59.734 19:17:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:59.995 19:17:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:15:59.995 { 00:15:59.995 "nbd_device": "/dev/nbd0", 00:15:59.995 "bdev_name": "nvme0n1" 00:15:59.995 }, 00:15:59.995 { 00:15:59.995 "nbd_device": "/dev/nbd1", 00:15:59.995 "bdev_name": "nvme0n2" 00:15:59.995 }, 00:15:59.995 { 00:15:59.995 "nbd_device": "/dev/nbd2", 00:15:59.995 "bdev_name": "nvme0n3" 00:15:59.995 }, 00:15:59.995 { 00:15:59.995 "nbd_device": "/dev/nbd3", 00:15:59.995 "bdev_name": "nvme1n1" 00:15:59.995 }, 00:15:59.995 { 00:15:59.995 "nbd_device": "/dev/nbd4", 00:15:59.995 "bdev_name": "nvme2n1" 00:15:59.995 }, 00:15:59.995 { 00:15:59.995 "nbd_device": "/dev/nbd5", 00:15:59.995 "bdev_name": "nvme3n1" 00:15:59.995 } 00:15:59.995 ]' 00:15:59.995 19:17:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:15:59.995 19:17:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:15:59.995 { 00:15:59.995 "nbd_device": "/dev/nbd0", 00:15:59.995 "bdev_name": "nvme0n1" 00:15:59.995 }, 00:15:59.995 { 00:15:59.995 "nbd_device": "/dev/nbd1", 00:15:59.995 "bdev_name": "nvme0n2" 00:15:59.995 }, 00:15:59.995 { 00:15:59.995 "nbd_device": "/dev/nbd2", 00:15:59.995 "bdev_name": "nvme0n3" 00:15:59.995 }, 00:15:59.995 { 00:15:59.995 "nbd_device": "/dev/nbd3", 00:15:59.995 "bdev_name": "nvme1n1" 00:15:59.995 }, 00:15:59.995 { 00:15:59.995 "nbd_device": "/dev/nbd4", 00:15:59.995 "bdev_name": "nvme2n1" 00:15:59.995 }, 00:15:59.995 { 00:15:59.995 "nbd_device": "/dev/nbd5", 00:15:59.995 "bdev_name": "nvme3n1" 00:15:59.995 } 00:15:59.995 ]' 00:15:59.995 19:17:44 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:15:59.995 19:17:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:15:59.995 19:17:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:59.995 19:17:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:15:59.995 19:17:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:59.995 19:17:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:15:59.995 19:17:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:59.995 19:17:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:16:00.257 19:17:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:00.257 19:17:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:00.257 19:17:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:00.257 19:17:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:00.257 19:17:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:00.257 19:17:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:00.257 19:17:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:00.257 19:17:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:00.257 19:17:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:00.257 19:17:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:16:00.518 19:17:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:00.519 19:17:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:00.519 19:17:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:00.519 19:17:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:00.519 19:17:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:00.519 19:17:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:00.519 19:17:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:00.519 19:17:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:00.519 19:17:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:00.519 19:17:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:16:00.780 19:17:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:16:00.780 19:17:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:16:00.780 19:17:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:16:00.780 19:17:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:00.780 19:17:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:00.780 19:17:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:16:00.780 19:17:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:00.780 19:17:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:00.780 19:17:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:00.780 19:17:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:16:01.040 19:17:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:16:01.040 19:17:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:16:01.040 19:17:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:16:01.040 19:17:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:01.040 19:17:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:01.040 19:17:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:16:01.040 19:17:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:01.040 19:17:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:01.040 19:17:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:01.040 19:17:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:16:01.302 19:17:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:16:01.302 19:17:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:16:01.302 19:17:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:16:01.302 19:17:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:01.302 19:17:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:01.302 19:17:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:16:01.302 19:17:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:01.302 19:17:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:01.302 19:17:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:01.302 19:17:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:16:01.302 19:17:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:16:01.302 19:17:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:16:01.302 19:17:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:16:01.302 19:17:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:01.302 19:17:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:01.302 19:17:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:16:01.302 19:17:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:01.302 19:17:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:01.302 19:17:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:16:01.302 19:17:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:01.302 19:17:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:01.564 19:17:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:16:01.564 19:17:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:16:01.564 19:17:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:01.564 19:17:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:16:01.564 19:17:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:01.564 19:17:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:16:01.564 19:17:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:16:01.564 19:17:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:16:01.564 19:17:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:16:01.564 19:17:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:16:01.564 19:17:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:16:01.564 19:17:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:16:01.564 19:17:45 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:16:01.564 19:17:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:01.564 19:17:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:16:01.564 19:17:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:16:01.564 19:17:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:16:01.564 19:17:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:16:01.564 19:17:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:16:01.564 19:17:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:01.564 19:17:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:16:01.564 19:17:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:01.564 19:17:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:16:01.564 19:17:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:01.564 19:17:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:16:01.564 19:17:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:01.565 19:17:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:16:01.565 19:17:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:16:01.825 /dev/nbd0 00:16:01.826 19:17:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:01.826 19:17:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:01.826 19:17:46 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:01.826 19:17:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:16:01.826 19:17:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:01.826 19:17:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:01.826 19:17:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:01.826 19:17:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:16:01.826 19:17:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:01.826 19:17:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:01.826 19:17:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:01.826 1+0 records in 00:16:01.826 1+0 records out 00:16:01.826 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000440109 s, 9.3 MB/s 00:16:01.826 19:17:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:01.826 19:17:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:16:01.826 19:17:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:01.826 19:17:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:01.826 19:17:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:16:01.826 19:17:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:01.826 19:17:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:16:01.826 19:17:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 /dev/nbd1 00:16:02.086 /dev/nbd1 00:16:02.086 19:17:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:02.086 19:17:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:02.086 19:17:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:02.086 19:17:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:16:02.086 19:17:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:02.086 19:17:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:02.086 19:17:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:02.086 19:17:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:16:02.086 19:17:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:02.086 19:17:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:02.087 19:17:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:02.087 1+0 records in 00:16:02.087 1+0 records out 00:16:02.087 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000553134 s, 7.4 MB/s 00:16:02.087 19:17:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:02.087 19:17:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:16:02.087 19:17:46 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:02.087 19:17:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:02.087 19:17:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:16:02.087 19:17:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:02.087 19:17:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:16:02.087 19:17:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 /dev/nbd10 00:16:02.348 /dev/nbd10 00:16:02.348 19:17:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:16:02.348 19:17:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:16:02.348 19:17:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:16:02.348 19:17:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:16:02.348 19:17:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:02.348 19:17:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:02.348 19:17:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:16:02.348 19:17:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:16:02.348 19:17:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:02.348 19:17:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:02.348 19:17:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:02.348 1+0 records in 00:16:02.348 1+0 records out 00:16:02.348 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000402411 s, 10.2 MB/s 00:16:02.348 19:17:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:02.348 19:17:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:16:02.348 19:17:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:02.348 19:17:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:02.348 19:17:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:16:02.348 19:17:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:02.348 19:17:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:16:02.348 19:17:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd11 00:16:02.609 /dev/nbd11 00:16:02.609 19:17:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:16:02.609 19:17:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:16:02.609 19:17:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:16:02.609 19:17:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:16:02.609 19:17:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:02.609 19:17:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:02.609 19:17:46 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:16:02.609 19:17:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:16:02.609 19:17:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:02.609 19:17:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:02.609 19:17:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:02.609 1+0 records in 00:16:02.609 1+0 records out 00:16:02.609 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000460573 s, 8.9 MB/s 00:16:02.609 19:17:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:02.609 19:17:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:16:02.609 19:17:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:02.609 19:17:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:02.609 19:17:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:16:02.609 19:17:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:02.609 19:17:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:16:02.610 19:17:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd12 00:16:02.871 /dev/nbd12 00:16:02.871 19:17:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:16:02.871 19:17:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:16:02.871 19:17:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:16:02.871 19:17:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:16:02.871 19:17:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:02.871 19:17:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:02.871 19:17:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:16:02.871 19:17:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:16:02.871 19:17:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:02.871 19:17:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:02.871 19:17:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:02.871 1+0 records in 00:16:02.871 1+0 records out 00:16:02.871 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000605167 s, 6.8 MB/s 00:16:02.871 19:17:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:02.871 19:17:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:16:02.871 19:17:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:02.871 19:17:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:02.871 19:17:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:16:02.871 19:17:47 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:02.871 19:17:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:16:02.871 19:17:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:16:03.133 /dev/nbd13 00:16:03.133 19:17:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:16:03.133 19:17:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:16:03.133 19:17:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:16:03.133 19:17:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:16:03.133 19:17:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:03.133 19:17:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:03.133 19:17:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:16:03.133 19:17:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:16:03.133 19:17:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:03.133 19:17:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:03.133 19:17:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:03.133 1+0 records in 00:16:03.133 1+0 records out 00:16:03.133 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000837938 s, 4.9 MB/s 00:16:03.133 19:17:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:03.133 19:17:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:16:03.133 19:17:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:03.133 19:17:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:03.133 19:17:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:16:03.133 19:17:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:03.133 19:17:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:16:03.133 19:17:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:16:03.133 19:17:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:03.133 19:17:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:03.394 19:17:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:16:03.394 { 00:16:03.394 "nbd_device": "/dev/nbd0", 00:16:03.394 "bdev_name": "nvme0n1" 00:16:03.394 }, 00:16:03.394 { 00:16:03.394 "nbd_device": "/dev/nbd1", 00:16:03.394 "bdev_name": "nvme0n2" 00:16:03.394 }, 00:16:03.394 { 00:16:03.394 "nbd_device": "/dev/nbd10", 00:16:03.394 "bdev_name": "nvme0n3" 00:16:03.394 }, 00:16:03.394 { 00:16:03.394 "nbd_device": "/dev/nbd11", 00:16:03.394 "bdev_name": "nvme1n1" 00:16:03.394 }, 00:16:03.394 { 00:16:03.394 "nbd_device": "/dev/nbd12", 00:16:03.394 "bdev_name": "nvme2n1" 00:16:03.394 }, 00:16:03.394 { 00:16:03.394 "nbd_device": "/dev/nbd13", 00:16:03.394 "bdev_name": "nvme3n1" 00:16:03.394 } 00:16:03.394 ]' 00:16:03.394 19:17:47 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:16:03.394 { 00:16:03.394 "nbd_device": "/dev/nbd0", 00:16:03.394 "bdev_name": "nvme0n1" 00:16:03.394 }, 00:16:03.394 { 00:16:03.394 "nbd_device": "/dev/nbd1", 00:16:03.394 "bdev_name": "nvme0n2" 00:16:03.394 }, 00:16:03.394 { 00:16:03.394 "nbd_device": "/dev/nbd10", 00:16:03.394 "bdev_name": "nvme0n3" 00:16:03.394 }, 00:16:03.394 { 00:16:03.394 "nbd_device": "/dev/nbd11", 00:16:03.394 "bdev_name": "nvme1n1" 00:16:03.394 }, 00:16:03.394 { 00:16:03.394 "nbd_device": "/dev/nbd12", 00:16:03.394 "bdev_name": "nvme2n1" 00:16:03.394 }, 00:16:03.394 { 00:16:03.394 "nbd_device": "/dev/nbd13", 00:16:03.394 "bdev_name": "nvme3n1" 00:16:03.394 } 00:16:03.394 ]' 00:16:03.394 19:17:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:03.394 19:17:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:16:03.394 /dev/nbd1 00:16:03.394 /dev/nbd10 00:16:03.394 /dev/nbd11 00:16:03.394 /dev/nbd12 00:16:03.394 /dev/nbd13' 00:16:03.394 19:17:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:16:03.394 /dev/nbd1 00:16:03.394 /dev/nbd10 00:16:03.394 /dev/nbd11 00:16:03.394 /dev/nbd12 00:16:03.394 /dev/nbd13' 00:16:03.394 19:17:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:03.394 19:17:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:16:03.394 19:17:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:16:03.394 19:17:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:16:03.394 19:17:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:16:03.394 19:17:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:16:03.394 19:17:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:16:03.394 19:17:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:16:03.394 19:17:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:16:03.394 19:17:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:16:03.394 19:17:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:16:03.394 19:17:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:16:03.394 256+0 records in 00:16:03.394 256+0 records out 00:16:03.394 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0121103 s, 86.6 MB/s 00:16:03.394 19:17:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:03.394 19:17:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:16:03.656 256+0 records in 00:16:03.656 256+0 records out 00:16:03.656 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.239673 s, 4.4 MB/s 00:16:03.656 19:17:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:03.656 19:17:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:16:03.656 256+0 records in 00:16:03.656 256+0 records out 00:16:03.656 1048576 bytes (1.0 MB, 1.0 
MiB) copied, 0.0871549 s, 12.0 MB/s 00:16:03.656 19:17:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:03.656 19:17:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:16:03.656 256+0 records in 00:16:03.656 256+0 records out 00:16:03.656 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0670105 s, 15.6 MB/s 00:16:03.656 19:17:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:03.656 19:17:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:16:03.917 256+0 records in 00:16:03.917 256+0 records out 00:16:03.917 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.108441 s, 9.7 MB/s 00:16:03.917 19:17:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:03.917 19:17:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:16:04.178 256+0 records in 00:16:04.178 256+0 records out 00:16:04.178 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.211701 s, 5.0 MB/s 00:16:04.178 19:17:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:04.178 19:17:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:16:04.178 256+0 records in 00:16:04.178 256+0 records out 00:16:04.178 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.225871 s, 4.6 MB/s 00:16:04.178 19:17:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:16:04.178 19:17:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:16:04.178 19:17:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:16:04.178 19:17:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:16:04.178 19:17:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:16:04.178 19:17:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:16:04.178 19:17:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:16:04.178 19:17:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:04.178 19:17:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:16:04.439 19:17:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:04.439 19:17:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:16:04.439 19:17:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:04.439 19:17:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:16:04.439 19:17:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:04.439 19:17:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:16:04.439 19:17:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:04.439 19:17:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:16:04.439 19:17:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:04.439 19:17:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:16:04.439 19:17:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:16:04.439 19:17:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:16:04.439 19:17:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:04.439 19:17:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:16:04.439 19:17:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:04.439 19:17:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:16:04.439 19:17:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:04.439 19:17:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:16:04.700 19:17:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:04.701 19:17:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:04.701 19:17:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:04.701 19:17:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:04.701 19:17:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:04.701 19:17:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:04.701 19:17:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:04.701 19:17:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:04.701 19:17:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:04.701 19:17:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:16:04.701 19:17:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:04.701 19:17:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:04.701 19:17:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:04.701 19:17:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:04.701 19:17:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:04.701 19:17:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:04.701 19:17:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:04.701 19:17:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:04.701 19:17:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:04.701 19:17:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:16:04.962 19:17:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:16:04.962 19:17:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:16:04.962 19:17:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:16:04.962 19:17:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:04.962 19:17:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:04.962 19:17:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:16:04.962 19:17:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:04.962 19:17:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:04.962 19:17:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:04.962 19:17:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:16:05.223 19:17:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:16:05.223 19:17:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:16:05.223 19:17:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:16:05.223 19:17:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:05.223 19:17:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:05.223 19:17:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:16:05.223 19:17:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:05.223 19:17:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:05.223 19:17:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:05.223 19:17:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:16:05.483 19:17:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:16:05.483 19:17:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:16:05.483 19:17:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:16:05.483 19:17:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:05.483 19:17:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:05.483 19:17:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:16:05.483 19:17:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:05.483 19:17:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:05.483 19:17:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:05.483 19:17:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:16:05.744 19:17:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:16:05.744 19:17:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:16:05.744 19:17:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:16:05.744 19:17:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:05.744 19:17:49 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:05.744 19:17:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:16:05.744 19:17:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:05.744 19:17:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:05.744 19:17:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:16:05.744 19:17:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:05.744 19:17:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:06.005 19:17:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:16:06.005 19:17:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:16:06.005 19:17:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:06.005 19:17:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:16:06.005 19:17:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:16:06.005 19:17:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:06.005 19:17:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:16:06.005 19:17:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:16:06.005 19:17:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:16:06.005 19:17:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:16:06.005 19:17:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:16:06.005 19:17:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:16:06.005 19:17:50 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:16:06.005 19:17:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:06.005 19:17:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:16:06.005 19:17:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:16:06.266 malloc_lvol_verify 00:16:06.266 19:17:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:16:06.527 0c0e3cc6-d41e-49ab-83fb-a82d3a408618 00:16:06.527 19:17:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:16:06.789 9a08488c-1ec5-41b6-888c-27c8b5dbb28f 00:16:06.789 19:17:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:16:06.789 /dev/nbd0 00:16:07.051 19:17:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:16:07.051 19:17:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:16:07.051 19:17:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:16:07.051 19:17:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:16:07.051 19:17:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 
00:16:07.051 mke2fs 1.47.0 (5-Feb-2023) 00:16:07.051 Discarding device blocks: 0/4096 done 00:16:07.051 Creating filesystem with 4096 1k blocks and 1024 inodes 00:16:07.051 00:16:07.051 Allocating group tables: 0/1 done 00:16:07.051 Writing inode tables: 0/1 done 00:16:07.051 Creating journal (1024 blocks): done 00:16:07.051 Writing superblocks and filesystem accounting information: 0/1 done 00:16:07.051 00:16:07.051 19:17:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:16:07.051 19:17:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:07.051 19:17:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:07.051 19:17:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:07.051 19:17:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:16:07.051 19:17:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:07.051 19:17:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:16:07.051 19:17:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:07.051 19:17:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:07.051 19:17:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:07.051 19:17:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:07.051 19:17:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:07.051 19:17:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:07.051 19:17:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:07.051 19:17:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:07.051 19:17:51 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 74185 00:16:07.051 19:17:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 74185 ']' 00:16:07.051 19:17:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 74185 00:16:07.051 19:17:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:16:07.051 19:17:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:07.051 19:17:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74185 00:16:07.313 19:17:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:07.313 19:17:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:07.313 killing process with pid 74185 00:16:07.313 19:17:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74185' 00:16:07.313 19:17:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 74185 00:16:07.313 19:17:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 74185 00:16:07.886 19:17:52 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:16:07.886 00:16:07.886 real 0m10.603s 00:16:07.886 user 0m14.460s 00:16:07.886 sys 0m3.732s 00:16:07.886 ************************************ 00:16:07.886 END TEST bdev_nbd 00:16:07.886 ************************************ 00:16:07.886 19:17:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:07.886 
19:17:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:16:07.886 19:17:52 blockdev_xnvme -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:16:07.886 19:17:52 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = nvme ']' 00:16:07.886 19:17:52 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = gpt ']' 00:16:07.886 19:17:52 blockdev_xnvme -- bdev/blockdev.sh@805 -- # run_test bdev_fio fio_test_suite '' 00:16:07.886 19:17:52 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:07.886 19:17:52 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:07.886 19:17:52 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:07.886 ************************************ 00:16:07.886 START TEST bdev_fio 00:16:07.886 ************************************ 00:16:07.886 19:17:52 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:16:07.886 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:16:07.886 19:17:52 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:16:07.886 19:17:52 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:16:07.886 19:17:52 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:16:07.886 19:17:52 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:16:07.886 19:17:52 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:16:07.886 19:17:52 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:16:07.886 19:17:52 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:16:07.886 19:17:52 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:07.886 19:17:52 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:16:07.886 19:17:52 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:16:07.886 19:17:52 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:16:07.886 19:17:52 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:16:07.886 19:17:52 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:16:07.886 19:17:52 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:16:07.886 19:17:52 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:16:07.886 19:17:52 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:07.886 19:17:52 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:16:07.886 19:17:52 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:16:07.886 19:17:52 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:16:07.886 19:17:52 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:16:07.886 19:17:52 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:16:08.147 19:17:52 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:16:08.147 19:17:52 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # echo 
serialize_overlap=1 00:16:08.147 19:17:52 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:16:08.147 19:17:52 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:16:08.147 19:17:52 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:16:08.147 19:17:52 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:16:08.147 19:17:52 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n2]' 00:16:08.147 19:17:52 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n2 00:16:08.147 19:17:52 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:16:08.147 19:17:52 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n3]' 00:16:08.147 19:17:52 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n3 00:16:08.147 19:17:52 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:16:08.147 19:17:52 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:16:08.147 19:17:52 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:16:08.147 19:17:52 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:16:08.147 19:17:52 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:16:08.147 19:17:52 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:16:08.147 19:17:52 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:16:08.147 19:17:52 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:16:08.147 19:17:52 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:16:08.148 19:17:52 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:16:08.148 19:17:52 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:16:08.148 19:17:52 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:16:08.148 19:17:52 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:08.148 19:17:52 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:16:08.148 ************************************ 00:16:08.148 START TEST bdev_fio_rw_verify 00:16:08.148 ************************************ 00:16:08.148 19:17:52 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:16:08.148 19:17:52 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 
--spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:16:08.148 19:17:52 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:16:08.148 19:17:52 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:08.148 19:17:52 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:16:08.148 19:17:52 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:08.148 19:17:52 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:16:08.148 19:17:52 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:16:08.148 19:17:52 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:16:08.148 19:17:52 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:16:08.148 19:17:52 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:16:08.148 19:17:52 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:08.148 19:17:52 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:08.148 19:17:52 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:08.148 19:17:52 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1351 -- # break 00:16:08.148 19:17:52 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:08.148 19:17:52 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:16:08.148 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:08.148 job_nvme0n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:08.148 job_nvme0n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:08.148 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:08.148 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:08.148 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:08.148 fio-3.35 00:16:08.148 Starting 6 threads 00:16:20.385 00:16:20.386 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=74604: Mon Dec 16 19:18:04 2024 00:16:20.386 read: IOPS=14.2k, BW=55.3MiB/s (58.0MB/s)(553MiB/10002msec) 00:16:20.386 slat (usec): min=2, max=1971, avg= 6.90, stdev=15.94 00:16:20.386 clat (usec): min=75, max=7713, avg=1340.21, 
stdev=791.45 00:16:20.386 lat (usec): min=78, max=7728, avg=1347.11, stdev=792.11 00:16:20.386 clat percentiles (usec): 00:16:20.386 | 50.000th=[ 1237], 99.000th=[ 3720], 99.900th=[ 5014], 99.990th=[ 7504], 00:16:20.386 | 99.999th=[ 7701] 00:16:20.386 write: IOPS=14.5k, BW=56.8MiB/s (59.5MB/s)(568MiB/10002msec); 0 zone resets 00:16:20.386 slat (usec): min=13, max=3631, avg=44.04, stdev=148.84 00:16:20.386 clat (usec): min=88, max=8198, avg=1656.22, stdev=879.62 00:16:20.386 lat (usec): min=106, max=8465, avg=1700.26, stdev=892.87 00:16:20.386 clat percentiles (usec): 00:16:20.386 | 50.000th=[ 1532], 99.000th=[ 4359], 99.900th=[ 5800], 99.990th=[ 6849], 00:16:20.386 | 99.999th=[ 8160] 00:16:20.386 bw ( KiB/s): min=49002, max=80602, per=100.00%, avg=58422.74, stdev=1627.33, samples=114 00:16:20.386 iops : min=12246, max=20150, avg=14604.84, stdev=406.84, samples=114 00:16:20.386 lat (usec) : 100=0.01%, 250=2.33%, 500=7.38%, 750=9.30%, 1000=11.27% 00:16:20.386 lat (msec) : 2=46.28%, 4=22.24%, 10=1.20% 00:16:20.386 cpu : usr=41.49%, sys=33.79%, ctx=5216, majf=0, minf=14520 00:16:20.386 IO depths : 1=11.2%, 2=23.6%, 4=51.3%, 8=13.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:20.386 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:20.386 complete : 0=0.0%, 4=89.2%, 8=10.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:20.386 issued rwts: total=141547,145349,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:20.386 latency : target=0, window=0, percentile=100.00%, depth=8 00:16:20.386 00:16:20.386 Run status group 0 (all jobs): 00:16:20.386 READ: bw=55.3MiB/s (58.0MB/s), 55.3MiB/s-55.3MiB/s (58.0MB/s-58.0MB/s), io=553MiB (580MB), run=10002-10002msec 00:16:20.386 WRITE: bw=56.8MiB/s (59.5MB/s), 56.8MiB/s-56.8MiB/s (59.5MB/s-59.5MB/s), io=568MiB (595MB), run=10002-10002msec 00:16:20.958 ----------------------------------------------------- 00:16:20.958 Suppressions used: 00:16:20.958 count bytes template 00:16:20.958 6 48 /usr/src/fio/parse.c 00:16:20.958 3712 356352 /usr/src/fio/iolog.c 00:16:20.958 1 8 libtcmalloc_minimal.so 00:16:20.958 1 904 libcrypto.so 00:16:20.958 ----------------------------------------------------- 00:16:20.958 00:16:20.958 00:16:20.958 real 0m12.983s 00:16:20.958 user 0m26.375s 00:16:20.958 sys 0m20.576s 00:16:20.958 19:18:05 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:20.958 ************************************ 00:16:20.958 END TEST bdev_fio_rw_verify 00:16:20.958 ************************************ 00:16:20.958 19:18:05 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:16:20.958 19:18:05 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:16:20.958 19:18:05 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:21.219 19:18:05 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:16:21.219 19:18:05 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:21.219 19:18:05 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:16:21.219 19:18:05 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:16:21.219 19:18:05 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:16:21.219 19:18:05 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local 
fio_dir=/usr/src/fio 00:16:21.219 19:18:05 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:16:21.219 19:18:05 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:16:21.219 19:18:05 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:16:21.219 19:18:05 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:21.219 19:18:05 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:16:21.219 19:18:05 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:16:21.219 19:18:05 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:16:21.219 19:18:05 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:16:21.219 19:18:05 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:16:21.220 19:18:05 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "6130f36d-8392-4a2b-856c-eea1a06bd9ae"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "6130f36d-8392-4a2b-856c-eea1a06bd9ae",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "f54b085b-9f53-4160-b015-61befe715143"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "f54b085b-9f53-4160-b015-61befe715143",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "24bc4445-df25-4b0f-a60a-981c5a9a388c"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "24bc4445-df25-4b0f-a60a-981c5a9a388c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": 
true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "8addc0d0-015d-4467-b8f2-8fca403885d2"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "8addc0d0-015d-4467-b8f2-8fca403885d2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "61c3f9b4-a7cf-4830-8d99-4bd23bf53ab9"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "61c3f9b4-a7cf-4830-8d99-4bd23bf53ab9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "bac8260f-df62-4ff9-b9c0-cf4b191a5915"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "bac8260f-df62-4ff9-b9c0-cf4b191a5915",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:16:21.220 19:18:05 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:16:21.220 19:18:05 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:21.220 /home/vagrant/spdk_repo/spdk 00:16:21.220 19:18:05 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:16:21.220 19:18:05 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:16:21.220 19:18:05 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 
00:16:21.220 00:16:21.220 real 0m13.143s 00:16:21.220 user 0m26.450s 00:16:21.220 sys 0m20.641s 00:16:21.220 19:18:05 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:21.220 ************************************ 00:16:21.220 END TEST bdev_fio 00:16:21.220 ************************************ 00:16:21.220 19:18:05 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:16:21.220 19:18:05 blockdev_xnvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:16:21.220 19:18:05 blockdev_xnvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:16:21.220 19:18:05 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:16:21.220 19:18:05 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:21.220 19:18:05 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:21.220 ************************************ 00:16:21.220 START TEST bdev_verify 00:16:21.220 ************************************ 00:16:21.220 19:18:05 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:16:21.220 [2024-12-16 19:18:05.484272] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:16:21.220 [2024-12-16 19:18:05.484416] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74781 ] 00:16:21.480 [2024-12-16 19:18:05.649826] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:21.480 [2024-12-16 19:18:05.773964] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:16:21.480 [2024-12-16 19:18:05.774070] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:22.065 Running I/O for 5 seconds... 
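bdev_verify above launches bdevperf with a 5-second verify workload. A reformatted copy of the same command, with the flags glossed (the glosses are standard bdevperf option meanings, not taken from this log; -C is passed through exactly as in the trace):

    # bdevperf verify run: -q 128 queue depth, -o 4096-byte IOs, -w verify workload,
    # -t 5 seconds, -m 0x3 core mask (matching the two reactors started on cores 0 and 1).
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
      -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''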
00:16:24.400 24512.00 IOPS, 95.75 MiB/s [2024-12-16T19:18:09.698Z] 24031.50 IOPS, 93.87 MiB/s [2024-12-16T19:18:10.643Z] 24074.33 IOPS, 94.04 MiB/s [2024-12-16T19:18:11.587Z] 23471.75 IOPS, 91.69 MiB/s [2024-12-16T19:18:11.587Z] 23052.60 IOPS, 90.05 MiB/s 00:16:27.233 Latency(us) 00:16:27.233 [2024-12-16T19:18:11.587Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:27.233 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:27.233 Verification LBA range: start 0x0 length 0x80000 00:16:27.233 nvme0n1 : 5.07 1841.76 7.19 0.00 0.00 69382.71 11645.24 79449.80 00:16:27.233 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:27.233 Verification LBA range: start 0x80000 length 0x80000 00:16:27.233 nvme0n1 : 5.03 1731.40 6.76 0.00 0.00 73788.84 11241.94 64124.46 00:16:27.233 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:27.233 Verification LBA range: start 0x0 length 0x80000 00:16:27.233 nvme0n2 : 5.04 1828.79 7.14 0.00 0.00 69766.80 9729.58 80256.39 00:16:27.233 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:27.233 Verification LBA range: start 0x80000 length 0x80000 00:16:27.233 nvme0n2 : 5.05 1749.35 6.83 0.00 0.00 72864.21 16837.71 64931.05 00:16:27.233 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:27.233 Verification LBA range: start 0x0 length 0x80000 00:16:27.233 nvme0n3 : 5.07 1817.44 7.10 0.00 0.00 70089.55 6654.42 68560.74 00:16:27.233 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:27.233 Verification LBA range: start 0x80000 length 0x80000 00:16:27.233 nvme0n3 : 5.07 1740.76 6.80 0.00 0.00 73053.99 12250.19 66140.95 00:16:27.233 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:27.233 Verification LBA range: start 0x0 length 0x20000 00:16:27.233 nvme1n1 : 5.05 1825.64 7.13 0.00 0.00 69660.29 11998.13 77836.60 00:16:27.233 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:27.233 Verification LBA range: start 0x20000 length 0x20000 00:16:27.233 nvme1n1 : 5.09 1760.57 6.88 0.00 0.00 72080.02 6906.49 67350.84 00:16:27.233 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:27.233 Verification LBA range: start 0x0 length 0xbd0bd 00:16:27.233 nvme2n1 : 5.09 2358.20 9.21 0.00 0.00 53704.72 6200.71 58478.28 00:16:27.233 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:27.233 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:16:27.233 nvme2n1 : 5.10 2428.20 9.49 0.00 0.00 52143.64 7108.14 63317.86 00:16:27.233 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:27.233 Verification LBA range: start 0x0 length 0xa0000 00:16:27.233 nvme3n1 : 5.09 1885.74 7.37 0.00 0.00 67239.53 5494.94 72593.72 00:16:27.233 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:27.233 Verification LBA range: start 0xa0000 length 0xa0000 00:16:27.233 nvme3n1 : 5.10 1781.56 6.96 0.00 0.00 71050.56 5545.35 64931.05 00:16:27.233 [2024-12-16T19:18:11.587Z] =================================================================================================================== 00:16:27.233 [2024-12-16T19:18:11.587Z] Total : 22749.42 88.86 0.00 0.00 67053.93 5494.94 80256.39 00:16:28.177 00:16:28.177 real 0m6.776s 00:16:28.177 user 0m10.856s 00:16:28.177 sys 0m1.558s 00:16:28.177 ************************************ 00:16:28.177 END TEST 
bdev_verify 00:16:28.177 ************************************ 00:16:28.177 19:18:12 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:28.177 19:18:12 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:16:28.177 19:18:12 blockdev_xnvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:16:28.177 19:18:12 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:16:28.177 19:18:12 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:28.177 19:18:12 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:28.177 ************************************ 00:16:28.177 START TEST bdev_verify_big_io 00:16:28.177 ************************************ 00:16:28.177 19:18:12 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:16:28.177 [2024-12-16 19:18:12.326830] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:16:28.177 [2024-12-16 19:18:12.326978] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74876 ] 00:16:28.177 [2024-12-16 19:18:12.487604] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:28.438 [2024-12-16 19:18:12.610712] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:16:28.438 [2024-12-16 19:18:12.610822] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:29.010 Running I/O for 5 seconds... 
00:16:32.500 960.00 IOPS, 60.00 MiB/s [2024-12-16T19:18:19.399Z] 1437.00 IOPS, 89.81 MiB/s [2024-12-16T19:18:19.399Z] 2485.00 IOPS, 155.31 MiB/s 00:16:35.045 Latency(us) 00:16:35.045 [2024-12-16T19:18:19.399Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:35.045 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:35.045 Verification LBA range: start 0x0 length 0x8000 00:16:35.045 nvme0n1 : 5.68 132.40 8.28 0.00 0.00 939293.78 28230.89 1013085.74 00:16:35.045 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:35.045 Verification LBA range: start 0x8000 length 0x8000 00:16:35.045 nvme0n1 : 5.87 138.97 8.69 0.00 0.00 884758.24 156479.80 1238932.87 00:16:35.045 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:35.045 Verification LBA range: start 0x0 length 0x8000 00:16:35.045 nvme0n2 : 5.78 121.72 7.61 0.00 0.00 952108.04 66140.95 1161499.57 00:16:35.045 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:35.045 Verification LBA range: start 0x8000 length 0x8000 00:16:35.045 nvme0n2 : 5.87 141.66 8.85 0.00 0.00 842864.22 120989.54 803370.54 00:16:35.045 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:35.045 Verification LBA range: start 0x0 length 0x8000 00:16:35.045 nvme0n3 : 5.79 127.21 7.95 0.00 0.00 919937.76 84692.68 1529307.77 00:16:35.045 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:35.045 Verification LBA range: start 0x8000 length 0x8000 00:16:35.045 nvme0n3 : 5.59 146.00 9.12 0.00 0.00 799916.10 5595.77 806596.92 00:16:35.045 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:35.045 Verification LBA range: start 0x0 length 0x2000 00:16:35.045 nvme1n1 : 5.79 135.48 8.47 0.00 0.00 824617.20 8318.03 1613193.85 00:16:35.045 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:35.045 Verification LBA range: start 0x2000 length 0x2000 00:16:35.045 nvme1n1 : 5.88 138.19 8.64 0.00 0.00 834974.99 6856.07 1780966.01 00:16:35.045 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:35.045 Verification LBA range: start 0x0 length 0xbd0b 00:16:35.045 nvme2n1 : 5.87 178.97 11.19 0.00 0.00 620689.02 3377.62 974369.08 00:16:35.045 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:35.045 Verification LBA range: start 0xbd0b length 0xbd0b 00:16:35.045 nvme2n1 : 5.89 163.09 10.19 0.00 0.00 686605.91 3062.55 1116330.14 00:16:35.045 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:35.045 Verification LBA range: start 0x0 length 0xa000 00:16:35.045 nvme3n1 : 5.87 163.43 10.21 0.00 0.00 661740.05 1789.64 1019538.51 00:16:35.045 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:35.045 Verification LBA range: start 0xa000 length 0xa000 00:16:35.045 nvme3n1 : 5.89 135.94 8.50 0.00 0.00 798676.57 5066.44 1367988.38 00:16:35.045 [2024-12-16T19:18:19.399Z] =================================================================================================================== 00:16:35.045 [2024-12-16T19:18:19.399Z] Total : 1723.06 107.69 0.00 0.00 802162.61 1789.64 1780966.01 00:16:35.616 00:16:35.616 real 0m7.487s 00:16:35.616 user 0m13.711s 00:16:35.616 sys 0m0.462s 00:16:35.616 ************************************ 00:16:35.616 END TEST bdev_verify_big_io 00:16:35.616 ************************************ 00:16:35.616 
19:18:19 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:35.616 19:18:19 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:16:35.616 19:18:19 blockdev_xnvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:35.616 19:18:19 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:16:35.616 19:18:19 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:35.616 19:18:19 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:35.616 ************************************ 00:16:35.616 START TEST bdev_write_zeroes 00:16:35.616 ************************************ 00:16:35.616 19:18:19 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:35.616 [2024-12-16 19:18:19.865158] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:16:35.616 [2024-12-16 19:18:19.865293] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74981 ] 00:16:35.876 [2024-12-16 19:18:20.036637] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:35.876 [2024-12-16 19:18:20.112255] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:36.137 Running I/O for 1 seconds... 00:16:37.340 79680.00 IOPS, 311.25 MiB/s 00:16:37.340 Latency(us) 00:16:37.340 [2024-12-16T19:18:21.694Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:37.340 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:37.340 nvme0n1 : 1.01 12370.60 48.32 0.00 0.00 10338.24 6125.10 22685.54 00:16:37.340 Job: nvme0n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:37.340 nvme0n2 : 1.02 12356.00 48.27 0.00 0.00 10344.47 6150.30 21979.77 00:16:37.340 Job: nvme0n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:37.340 nvme0n3 : 1.02 12341.29 48.21 0.00 0.00 10351.04 6175.51 21273.99 00:16:37.340 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:37.340 nvme1n1 : 1.02 12327.56 48.15 0.00 0.00 10356.78 6251.13 20568.22 00:16:37.340 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:37.340 nvme2n1 : 1.03 17293.57 67.55 0.00 0.00 7376.11 3213.78 16434.41 00:16:37.340 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:37.340 nvme3n1 : 1.02 12437.74 48.58 0.00 0.00 10216.70 3856.54 21778.12 00:16:37.340 [2024-12-16T19:18:21.694Z] =================================================================================================================== 00:16:37.340 [2024-12-16T19:18:21.694Z] Total : 79126.77 309.09 0.00 0.00 9673.26 3213.78 22685.54 00:16:37.912 00:16:37.912 real 0m2.455s 00:16:37.912 user 0m1.760s 00:16:37.912 sys 0m0.527s 00:16:37.912 19:18:22 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:37.912 ************************************ 00:16:37.912 19:18:22 blockdev_xnvme.bdev_write_zeroes -- 
common/autotest_common.sh@10 -- # set +x 00:16:37.912 END TEST bdev_write_zeroes 00:16:37.912 ************************************ 00:16:38.173 19:18:22 blockdev_xnvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:38.173 19:18:22 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:16:38.173 19:18:22 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:38.174 19:18:22 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:38.174 ************************************ 00:16:38.174 START TEST bdev_json_nonenclosed 00:16:38.174 ************************************ 00:16:38.174 19:18:22 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:38.174 [2024-12-16 19:18:22.404878] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:16:38.174 [2024-12-16 19:18:22.405040] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75028 ] 00:16:38.435 [2024-12-16 19:18:22.574071] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:38.435 [2024-12-16 19:18:22.692409] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:38.435 [2024-12-16 19:18:22.692514] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:16:38.435 [2024-12-16 19:18:22.692534] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:16:38.435 [2024-12-16 19:18:22.692545] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:38.696 00:16:38.696 real 0m0.568s 00:16:38.696 user 0m0.343s 00:16:38.696 sys 0m0.117s 00:16:38.696 ************************************ 00:16:38.696 END TEST bdev_json_nonenclosed 00:16:38.696 ************************************ 00:16:38.696 19:18:22 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:38.696 19:18:22 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:16:38.696 19:18:22 blockdev_xnvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:38.696 19:18:22 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:16:38.696 19:18:22 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:38.696 19:18:22 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:38.696 ************************************ 00:16:38.696 START TEST bdev_json_nonarray 00:16:38.696 ************************************ 00:16:38.696 19:18:22 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:38.696 [2024-12-16 19:18:23.032047] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
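The two JSON negative tests that bracket this startup feed bdevperf deliberately malformed configs. The actual contents of nonenclosed.json and nonarray.json are not reproduced in this log; hypothetical minimal files that would produce the two json_config errors logged here could look like:

    # Hypothetical minimal reproductions (the real file contents are not shown in the trace):
    cat > nonenclosed.json <<'EOF'
    "subsystems": []
    EOF
    # -> json_config: Invalid JSON configuration: not enclosed in {}.

    cat > nonarray.json <<'EOF'
    { "subsystems": {} }
    EOF
    # -> json_config: Invalid JSON configuration: 'subsystems' should be an array.

In both runs bdevperf exits via "spdk_app_stop'd on non-zero", which is the outcome these tests expect.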
00:16:38.696 [2024-12-16 19:18:23.032213] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75054 ] 00:16:38.957 [2024-12-16 19:18:23.197479] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:39.218 [2024-12-16 19:18:23.320494] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:39.218 [2024-12-16 19:18:23.320603] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:16:39.218 [2024-12-16 19:18:23.320623] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:16:39.218 [2024-12-16 19:18:23.320634] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:39.218 00:16:39.218 real 0m0.559s 00:16:39.218 user 0m0.325s 00:16:39.218 sys 0m0.128s 00:16:39.218 19:18:23 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:39.218 ************************************ 00:16:39.218 END TEST bdev_json_nonarray 00:16:39.218 ************************************ 00:16:39.218 19:18:23 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:16:39.479 19:18:23 blockdev_xnvme -- bdev/blockdev.sh@824 -- # [[ xnvme == bdev ]] 00:16:39.479 19:18:23 blockdev_xnvme -- bdev/blockdev.sh@832 -- # [[ xnvme == gpt ]] 00:16:39.479 19:18:23 blockdev_xnvme -- bdev/blockdev.sh@836 -- # [[ xnvme == crypto_sw ]] 00:16:39.479 19:18:23 blockdev_xnvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:16:39.479 19:18:23 blockdev_xnvme -- bdev/blockdev.sh@849 -- # cleanup 00:16:39.479 19:18:23 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:16:39.479 19:18:23 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:16:39.479 19:18:23 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:16:39.479 19:18:23 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:16:39.479 19:18:23 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:16:39.479 19:18:23 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:16:39.479 19:18:23 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:39.740 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:42.283 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:16:42.283 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:16:42.283 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:16:42.544 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:16:42.544 00:16:42.544 real 0m53.537s 00:16:42.544 user 1m20.008s 00:16:42.544 sys 0m31.873s 00:16:42.544 19:18:26 blockdev_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:42.544 ************************************ 00:16:42.544 END TEST blockdev_xnvme 00:16:42.544 ************************************ 00:16:42.544 19:18:26 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:42.544 19:18:26 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:16:42.544 19:18:26 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:42.544 19:18:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:42.544 19:18:26 -- 
common/autotest_common.sh@10 -- # set +x 00:16:42.544 ************************************ 00:16:42.544 START TEST ublk 00:16:42.544 ************************************ 00:16:42.544 19:18:26 ublk -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:16:42.806 * Looking for test storage... 00:16:42.806 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:16:42.806 19:18:26 ublk -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:42.806 19:18:26 ublk -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:42.806 19:18:26 ublk -- common/autotest_common.sh@1711 -- # lcov --version 00:16:42.806 19:18:26 ublk -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:42.806 19:18:26 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:42.806 19:18:26 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:42.806 19:18:26 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:42.806 19:18:26 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:16:42.806 19:18:26 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:16:42.806 19:18:26 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:16:42.806 19:18:26 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:16:42.806 19:18:26 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:16:42.806 19:18:26 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:16:42.806 19:18:26 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:16:42.806 19:18:26 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:42.806 19:18:26 ublk -- scripts/common.sh@344 -- # case "$op" in 00:16:42.806 19:18:26 ublk -- scripts/common.sh@345 -- # : 1 00:16:42.806 19:18:26 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:42.806 19:18:26 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:42.806 19:18:26 ublk -- scripts/common.sh@365 -- # decimal 1 00:16:42.806 19:18:26 ublk -- scripts/common.sh@353 -- # local d=1 00:16:42.806 19:18:26 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:42.806 19:18:26 ublk -- scripts/common.sh@355 -- # echo 1 00:16:42.806 19:18:26 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:16:42.806 19:18:26 ublk -- scripts/common.sh@366 -- # decimal 2 00:16:42.806 19:18:26 ublk -- scripts/common.sh@353 -- # local d=2 00:16:42.806 19:18:26 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:42.806 19:18:26 ublk -- scripts/common.sh@355 -- # echo 2 00:16:42.806 19:18:26 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:16:42.806 19:18:26 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:42.806 19:18:26 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:42.806 19:18:26 ublk -- scripts/common.sh@368 -- # return 0 00:16:42.806 19:18:26 ublk -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:42.806 19:18:26 ublk -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:42.806 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:42.806 --rc genhtml_branch_coverage=1 00:16:42.806 --rc genhtml_function_coverage=1 00:16:42.806 --rc genhtml_legend=1 00:16:42.806 --rc geninfo_all_blocks=1 00:16:42.806 --rc geninfo_unexecuted_blocks=1 00:16:42.806 00:16:42.806 ' 00:16:42.806 19:18:26 ublk -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:42.806 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:42.806 --rc genhtml_branch_coverage=1 00:16:42.806 --rc genhtml_function_coverage=1 00:16:42.806 --rc genhtml_legend=1 00:16:42.806 --rc geninfo_all_blocks=1 00:16:42.806 --rc geninfo_unexecuted_blocks=1 00:16:42.806 00:16:42.806 ' 00:16:42.806 19:18:26 ublk -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:42.806 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:42.806 --rc genhtml_branch_coverage=1 00:16:42.806 --rc genhtml_function_coverage=1 00:16:42.806 --rc genhtml_legend=1 00:16:42.806 --rc geninfo_all_blocks=1 00:16:42.806 --rc geninfo_unexecuted_blocks=1 00:16:42.806 00:16:42.806 ' 00:16:42.806 19:18:26 ublk -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:42.806 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:42.806 --rc genhtml_branch_coverage=1 00:16:42.806 --rc genhtml_function_coverage=1 00:16:42.806 --rc genhtml_legend=1 00:16:42.806 --rc geninfo_all_blocks=1 00:16:42.806 --rc geninfo_unexecuted_blocks=1 00:16:42.806 00:16:42.806 ' 00:16:42.806 19:18:26 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:16:42.806 19:18:26 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:16:42.806 19:18:26 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:16:42.806 19:18:26 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:16:42.806 19:18:26 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:16:42.806 19:18:26 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:16:42.806 19:18:26 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:16:42.806 19:18:26 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:16:42.806 19:18:26 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:16:42.806 19:18:26 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:16:42.806 19:18:26 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:16:42.806 19:18:26 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:16:42.806 19:18:26 ublk 
-- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:16:42.806 19:18:26 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:16:42.806 19:18:26 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:16:42.806 19:18:26 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:16:42.806 19:18:26 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:16:42.806 19:18:26 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:16:42.806 19:18:26 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:16:42.806 19:18:27 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:16:42.806 19:18:27 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:42.806 19:18:27 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:42.806 19:18:27 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:42.806 ************************************ 00:16:42.806 START TEST test_save_ublk_config 00:16:42.806 ************************************ 00:16:42.806 19:18:27 ublk.test_save_ublk_config -- common/autotest_common.sh@1129 -- # test_save_config 00:16:42.806 19:18:27 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:16:42.806 19:18:27 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=75352 00:16:42.806 19:18:27 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:16:42.806 19:18:27 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 75352 00:16:42.806 19:18:27 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:16:42.806 19:18:27 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 75352 ']' 00:16:42.806 19:18:27 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:42.806 19:18:27 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:42.806 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:42.806 19:18:27 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:42.806 19:18:27 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:42.806 19:18:27 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:42.806 [2024-12-16 19:18:27.136995] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
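What the harness is doing at this point: ublk.sh has loaded the lvol helpers, set its constants, loaded the ublk_drv kernel module, and launched a dedicated spdk_tgt (pid 75352) that it drives over the default /var/tmp/spdk.sock RPC socket. A minimal sketch of the same bring-up done by hand with scripts/rpc.py, using the sizes this test requests (8192 blocks of 4096 B, i.e. 32 MiB, and one queue of depth 128, as the debug lines below confirm); the paths are the vagrant layout assumed by this run:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc ublk_create_target                      # spawn the ublk target ("cpumask": "1" in the saved config)
  $rpc bdev_malloc_create -b malloc0 32 4096   # 32 MiB RAM-backed bdev, 4096 B blocks
  $rpc ublk_start_disk malloc0 0 -q 1 -d 128   # kernel driver creates /dev/ublkb0
  $rpc save_config > ublk.json                 # dump the live JSON configuration, as ublk.sh@115 does below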
00:16:42.806 [2024-12-16 19:18:27.137151] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75352 ] 00:16:43.067 [2024-12-16 19:18:27.305225] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:43.328 [2024-12-16 19:18:27.429081] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:43.902 19:18:28 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:43.902 19:18:28 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:16:43.902 19:18:28 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:16:43.902 19:18:28 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:16:43.902 19:18:28 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.902 19:18:28 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:43.902 [2024-12-16 19:18:28.148199] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:43.902 [2024-12-16 19:18:28.149121] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:43.902 malloc0 00:16:43.902 [2024-12-16 19:18:28.219330] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:16:43.902 [2024-12-16 19:18:28.219429] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:16:43.902 [2024-12-16 19:18:28.219441] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:43.902 [2024-12-16 19:18:28.219449] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:43.902 [2024-12-16 19:18:28.228306] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:43.902 [2024-12-16 19:18:28.228333] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:43.902 [2024-12-16 19:18:28.232199] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:43.902 [2024-12-16 19:18:28.232317] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:43.902 [2024-12-16 19:18:28.251207] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:44.163 0 00:16:44.163 19:18:28 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.163 19:18:28 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:16:44.163 19:18:28 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.163 19:18:28 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:44.423 19:18:28 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.423 19:18:28 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:16:44.423 "subsystems": [ 00:16:44.423 { 00:16:44.423 "subsystem": "fsdev", 00:16:44.423 "config": [ 00:16:44.423 { 00:16:44.423 "method": "fsdev_set_opts", 00:16:44.423 "params": { 00:16:44.423 "fsdev_io_pool_size": 65535, 00:16:44.423 "fsdev_io_cache_size": 256 00:16:44.423 } 00:16:44.423 } 00:16:44.423 ] 00:16:44.423 }, 00:16:44.423 { 00:16:44.423 "subsystem": "keyring", 00:16:44.423 "config": [] 00:16:44.423 }, 00:16:44.423 { 00:16:44.423 "subsystem": "iobuf", 00:16:44.423 "config": [ 00:16:44.423 { 
00:16:44.423 "method": "iobuf_set_options", 00:16:44.423 "params": { 00:16:44.423 "small_pool_count": 8192, 00:16:44.423 "large_pool_count": 1024, 00:16:44.423 "small_bufsize": 8192, 00:16:44.423 "large_bufsize": 135168, 00:16:44.423 "enable_numa": false 00:16:44.423 } 00:16:44.423 } 00:16:44.423 ] 00:16:44.423 }, 00:16:44.423 { 00:16:44.423 "subsystem": "sock", 00:16:44.423 "config": [ 00:16:44.423 { 00:16:44.423 "method": "sock_set_default_impl", 00:16:44.423 "params": { 00:16:44.423 "impl_name": "posix" 00:16:44.423 } 00:16:44.423 }, 00:16:44.423 { 00:16:44.423 "method": "sock_impl_set_options", 00:16:44.423 "params": { 00:16:44.423 "impl_name": "ssl", 00:16:44.423 "recv_buf_size": 4096, 00:16:44.423 "send_buf_size": 4096, 00:16:44.423 "enable_recv_pipe": true, 00:16:44.423 "enable_quickack": false, 00:16:44.423 "enable_placement_id": 0, 00:16:44.423 "enable_zerocopy_send_server": true, 00:16:44.423 "enable_zerocopy_send_client": false, 00:16:44.423 "zerocopy_threshold": 0, 00:16:44.423 "tls_version": 0, 00:16:44.423 "enable_ktls": false 00:16:44.423 } 00:16:44.423 }, 00:16:44.423 { 00:16:44.423 "method": "sock_impl_set_options", 00:16:44.423 "params": { 00:16:44.423 "impl_name": "posix", 00:16:44.423 "recv_buf_size": 2097152, 00:16:44.423 "send_buf_size": 2097152, 00:16:44.423 "enable_recv_pipe": true, 00:16:44.423 "enable_quickack": false, 00:16:44.423 "enable_placement_id": 0, 00:16:44.423 "enable_zerocopy_send_server": true, 00:16:44.423 "enable_zerocopy_send_client": false, 00:16:44.423 "zerocopy_threshold": 0, 00:16:44.423 "tls_version": 0, 00:16:44.423 "enable_ktls": false 00:16:44.423 } 00:16:44.423 } 00:16:44.423 ] 00:16:44.423 }, 00:16:44.423 { 00:16:44.423 "subsystem": "vmd", 00:16:44.423 "config": [] 00:16:44.423 }, 00:16:44.423 { 00:16:44.423 "subsystem": "accel", 00:16:44.423 "config": [ 00:16:44.423 { 00:16:44.423 "method": "accel_set_options", 00:16:44.423 "params": { 00:16:44.423 "small_cache_size": 128, 00:16:44.423 "large_cache_size": 16, 00:16:44.423 "task_count": 2048, 00:16:44.423 "sequence_count": 2048, 00:16:44.423 "buf_count": 2048 00:16:44.423 } 00:16:44.423 } 00:16:44.423 ] 00:16:44.423 }, 00:16:44.423 { 00:16:44.423 "subsystem": "bdev", 00:16:44.423 "config": [ 00:16:44.423 { 00:16:44.423 "method": "bdev_set_options", 00:16:44.423 "params": { 00:16:44.423 "bdev_io_pool_size": 65535, 00:16:44.423 "bdev_io_cache_size": 256, 00:16:44.423 "bdev_auto_examine": true, 00:16:44.423 "iobuf_small_cache_size": 128, 00:16:44.423 "iobuf_large_cache_size": 16 00:16:44.423 } 00:16:44.423 }, 00:16:44.423 { 00:16:44.423 "method": "bdev_raid_set_options", 00:16:44.423 "params": { 00:16:44.423 "process_window_size_kb": 1024, 00:16:44.423 "process_max_bandwidth_mb_sec": 0 00:16:44.423 } 00:16:44.423 }, 00:16:44.423 { 00:16:44.423 "method": "bdev_iscsi_set_options", 00:16:44.423 "params": { 00:16:44.423 "timeout_sec": 30 00:16:44.423 } 00:16:44.423 }, 00:16:44.424 { 00:16:44.424 "method": "bdev_nvme_set_options", 00:16:44.424 "params": { 00:16:44.424 "action_on_timeout": "none", 00:16:44.424 "timeout_us": 0, 00:16:44.424 "timeout_admin_us": 0, 00:16:44.424 "keep_alive_timeout_ms": 10000, 00:16:44.424 "arbitration_burst": 0, 00:16:44.424 "low_priority_weight": 0, 00:16:44.424 "medium_priority_weight": 0, 00:16:44.424 "high_priority_weight": 0, 00:16:44.424 "nvme_adminq_poll_period_us": 10000, 00:16:44.424 "nvme_ioq_poll_period_us": 0, 00:16:44.424 "io_queue_requests": 0, 00:16:44.424 "delay_cmd_submit": true, 00:16:44.424 "transport_retry_count": 4, 00:16:44.424 
"bdev_retry_count": 3, 00:16:44.424 "transport_ack_timeout": 0, 00:16:44.424 "ctrlr_loss_timeout_sec": 0, 00:16:44.424 "reconnect_delay_sec": 0, 00:16:44.424 "fast_io_fail_timeout_sec": 0, 00:16:44.424 "disable_auto_failback": false, 00:16:44.424 "generate_uuids": false, 00:16:44.424 "transport_tos": 0, 00:16:44.424 "nvme_error_stat": false, 00:16:44.424 "rdma_srq_size": 0, 00:16:44.424 "io_path_stat": false, 00:16:44.424 "allow_accel_sequence": false, 00:16:44.424 "rdma_max_cq_size": 0, 00:16:44.424 "rdma_cm_event_timeout_ms": 0, 00:16:44.424 "dhchap_digests": [ 00:16:44.424 "sha256", 00:16:44.424 "sha384", 00:16:44.424 "sha512" 00:16:44.424 ], 00:16:44.424 "dhchap_dhgroups": [ 00:16:44.424 "null", 00:16:44.424 "ffdhe2048", 00:16:44.424 "ffdhe3072", 00:16:44.424 "ffdhe4096", 00:16:44.424 "ffdhe6144", 00:16:44.424 "ffdhe8192" 00:16:44.424 ], 00:16:44.424 "rdma_umr_per_io": false 00:16:44.424 } 00:16:44.424 }, 00:16:44.424 { 00:16:44.424 "method": "bdev_nvme_set_hotplug", 00:16:44.424 "params": { 00:16:44.424 "period_us": 100000, 00:16:44.424 "enable": false 00:16:44.424 } 00:16:44.424 }, 00:16:44.424 { 00:16:44.424 "method": "bdev_malloc_create", 00:16:44.424 "params": { 00:16:44.424 "name": "malloc0", 00:16:44.424 "num_blocks": 8192, 00:16:44.424 "block_size": 4096, 00:16:44.424 "physical_block_size": 4096, 00:16:44.424 "uuid": "f795e15c-8792-491f-aeea-e886afca8849", 00:16:44.424 "optimal_io_boundary": 0, 00:16:44.424 "md_size": 0, 00:16:44.424 "dif_type": 0, 00:16:44.424 "dif_is_head_of_md": false, 00:16:44.424 "dif_pi_format": 0 00:16:44.424 } 00:16:44.424 }, 00:16:44.424 { 00:16:44.424 "method": "bdev_wait_for_examine" 00:16:44.424 } 00:16:44.424 ] 00:16:44.424 }, 00:16:44.424 { 00:16:44.424 "subsystem": "scsi", 00:16:44.424 "config": null 00:16:44.424 }, 00:16:44.424 { 00:16:44.424 "subsystem": "scheduler", 00:16:44.424 "config": [ 00:16:44.424 { 00:16:44.424 "method": "framework_set_scheduler", 00:16:44.424 "params": { 00:16:44.424 "name": "static" 00:16:44.424 } 00:16:44.424 } 00:16:44.424 ] 00:16:44.424 }, 00:16:44.424 { 00:16:44.424 "subsystem": "vhost_scsi", 00:16:44.424 "config": [] 00:16:44.424 }, 00:16:44.424 { 00:16:44.424 "subsystem": "vhost_blk", 00:16:44.424 "config": [] 00:16:44.424 }, 00:16:44.424 { 00:16:44.424 "subsystem": "ublk", 00:16:44.424 "config": [ 00:16:44.424 { 00:16:44.424 "method": "ublk_create_target", 00:16:44.424 "params": { 00:16:44.424 "cpumask": "1" 00:16:44.424 } 00:16:44.424 }, 00:16:44.424 { 00:16:44.424 "method": "ublk_start_disk", 00:16:44.424 "params": { 00:16:44.424 "bdev_name": "malloc0", 00:16:44.424 "ublk_id": 0, 00:16:44.424 "num_queues": 1, 00:16:44.424 "queue_depth": 128 00:16:44.424 } 00:16:44.424 } 00:16:44.424 ] 00:16:44.424 }, 00:16:44.424 { 00:16:44.424 "subsystem": "nbd", 00:16:44.424 "config": [] 00:16:44.424 }, 00:16:44.424 { 00:16:44.424 "subsystem": "nvmf", 00:16:44.424 "config": [ 00:16:44.424 { 00:16:44.424 "method": "nvmf_set_config", 00:16:44.424 "params": { 00:16:44.424 "discovery_filter": "match_any", 00:16:44.424 "admin_cmd_passthru": { 00:16:44.424 "identify_ctrlr": false 00:16:44.424 }, 00:16:44.424 "dhchap_digests": [ 00:16:44.424 "sha256", 00:16:44.424 "sha384", 00:16:44.424 "sha512" 00:16:44.424 ], 00:16:44.424 "dhchap_dhgroups": [ 00:16:44.424 "null", 00:16:44.424 "ffdhe2048", 00:16:44.424 "ffdhe3072", 00:16:44.424 "ffdhe4096", 00:16:44.424 "ffdhe6144", 00:16:44.424 "ffdhe8192" 00:16:44.424 ] 00:16:44.424 } 00:16:44.424 }, 00:16:44.424 { 00:16:44.424 "method": "nvmf_set_max_subsystems", 00:16:44.424 "params": { 
00:16:44.424 "max_subsystems": 1024 00:16:44.424 } 00:16:44.424 }, 00:16:44.424 { 00:16:44.424 "method": "nvmf_set_crdt", 00:16:44.424 "params": { 00:16:44.424 "crdt1": 0, 00:16:44.424 "crdt2": 0, 00:16:44.424 "crdt3": 0 00:16:44.424 } 00:16:44.424 } 00:16:44.424 ] 00:16:44.424 }, 00:16:44.424 { 00:16:44.424 "subsystem": "iscsi", 00:16:44.424 "config": [ 00:16:44.424 { 00:16:44.424 "method": "iscsi_set_options", 00:16:44.424 "params": { 00:16:44.424 "node_base": "iqn.2016-06.io.spdk", 00:16:44.424 "max_sessions": 128, 00:16:44.424 "max_connections_per_session": 2, 00:16:44.424 "max_queue_depth": 64, 00:16:44.424 "default_time2wait": 2, 00:16:44.424 "default_time2retain": 20, 00:16:44.424 "first_burst_length": 8192, 00:16:44.424 "immediate_data": true, 00:16:44.424 "allow_duplicated_isid": false, 00:16:44.424 "error_recovery_level": 0, 00:16:44.424 "nop_timeout": 60, 00:16:44.424 "nop_in_interval": 30, 00:16:44.424 "disable_chap": false, 00:16:44.424 "require_chap": false, 00:16:44.424 "mutual_chap": false, 00:16:44.424 "chap_group": 0, 00:16:44.424 "max_large_datain_per_connection": 64, 00:16:44.424 "max_r2t_per_connection": 4, 00:16:44.424 "pdu_pool_size": 36864, 00:16:44.424 "immediate_data_pool_size": 16384, 00:16:44.424 "data_out_pool_size": 2048 00:16:44.424 } 00:16:44.424 } 00:16:44.424 ] 00:16:44.424 } 00:16:44.424 ] 00:16:44.424 }' 00:16:44.424 19:18:28 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 75352 00:16:44.424 19:18:28 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 75352 ']' 00:16:44.424 19:18:28 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 75352 00:16:44.424 19:18:28 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:16:44.424 19:18:28 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:44.424 19:18:28 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75352 00:16:44.424 19:18:28 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:44.424 killing process with pid 75352 00:16:44.424 19:18:28 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:44.424 19:18:28 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75352' 00:16:44.424 19:18:28 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 75352 00:16:44.424 19:18:28 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 75352 00:16:45.367 [2024-12-16 19:18:29.661905] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:45.367 [2024-12-16 19:18:29.695220] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:45.367 [2024-12-16 19:18:29.695403] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:45.367 [2024-12-16 19:18:29.702869] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:45.367 [2024-12-16 19:18:29.702928] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:16:45.367 [2024-12-16 19:18:29.702942] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:45.367 [2024-12-16 19:18:29.702975] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:45.367 [2024-12-16 19:18:29.703131] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:46.805 19:18:31 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=75412 00:16:46.805 19:18:31 
ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 75412 00:16:46.805 19:18:31 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 75412 ']' 00:16:46.805 19:18:31 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:46.805 19:18:31 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:46.805 19:18:31 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:46.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:46.805 19:18:31 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:46.805 19:18:31 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:46.805 19:18:31 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:16:46.805 19:18:31 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:16:46.805 "subsystems": [ 00:16:46.805 { 00:16:46.805 "subsystem": "fsdev", 00:16:46.805 "config": [ 00:16:46.805 { 00:16:46.805 "method": "fsdev_set_opts", 00:16:46.805 "params": { 00:16:46.805 "fsdev_io_pool_size": 65535, 00:16:46.805 "fsdev_io_cache_size": 256 00:16:46.805 } 00:16:46.805 } 00:16:46.805 ] 00:16:46.805 }, 00:16:46.805 { 00:16:46.805 "subsystem": "keyring", 00:16:46.805 "config": [] 00:16:46.805 }, 00:16:46.805 { 00:16:46.805 "subsystem": "iobuf", 00:16:46.805 "config": [ 00:16:46.805 { 00:16:46.805 "method": "iobuf_set_options", 00:16:46.805 "params": { 00:16:46.805 "small_pool_count": 8192, 00:16:46.805 "large_pool_count": 1024, 00:16:46.805 "small_bufsize": 8192, 00:16:46.805 "large_bufsize": 135168, 00:16:46.805 "enable_numa": false 00:16:46.805 } 00:16:46.805 } 00:16:46.805 ] 00:16:46.805 }, 00:16:46.805 { 00:16:46.805 "subsystem": "sock", 00:16:46.805 "config": [ 00:16:46.805 { 00:16:46.805 "method": "sock_set_default_impl", 00:16:46.805 "params": { 00:16:46.805 "impl_name": "posix" 00:16:46.805 } 00:16:46.805 }, 00:16:46.805 { 00:16:46.805 "method": "sock_impl_set_options", 00:16:46.805 "params": { 00:16:46.805 "impl_name": "ssl", 00:16:46.805 "recv_buf_size": 4096, 00:16:46.805 "send_buf_size": 4096, 00:16:46.805 "enable_recv_pipe": true, 00:16:46.805 "enable_quickack": false, 00:16:46.805 "enable_placement_id": 0, 00:16:46.805 "enable_zerocopy_send_server": true, 00:16:46.805 "enable_zerocopy_send_client": false, 00:16:46.805 "zerocopy_threshold": 0, 00:16:46.805 "tls_version": 0, 00:16:46.805 "enable_ktls": false 00:16:46.805 } 00:16:46.805 }, 00:16:46.805 { 00:16:46.805 "method": "sock_impl_set_options", 00:16:46.805 "params": { 00:16:46.805 "impl_name": "posix", 00:16:46.805 "recv_buf_size": 2097152, 00:16:46.805 "send_buf_size": 2097152, 00:16:46.805 "enable_recv_pipe": true, 00:16:46.805 "enable_quickack": false, 00:16:46.805 "enable_placement_id": 0, 00:16:46.805 "enable_zerocopy_send_server": true, 00:16:46.805 "enable_zerocopy_send_client": false, 00:16:46.805 "zerocopy_threshold": 0, 00:16:46.805 "tls_version": 0, 00:16:46.805 "enable_ktls": false 00:16:46.805 } 00:16:46.805 } 00:16:46.805 ] 00:16:46.805 }, 00:16:46.805 { 00:16:46.805 "subsystem": "vmd", 00:16:46.805 "config": [] 00:16:46.805 }, 00:16:46.805 { 00:16:46.805 "subsystem": "accel", 00:16:46.805 "config": [ 00:16:46.805 { 00:16:46.805 "method": "accel_set_options", 00:16:46.805 "params": { 00:16:46.805 
"small_cache_size": 128, 00:16:46.805 "large_cache_size": 16, 00:16:46.805 "task_count": 2048, 00:16:46.805 "sequence_count": 2048, 00:16:46.805 "buf_count": 2048 00:16:46.805 } 00:16:46.805 } 00:16:46.805 ] 00:16:46.805 }, 00:16:46.805 { 00:16:46.805 "subsystem": "bdev", 00:16:46.805 "config": [ 00:16:46.805 { 00:16:46.805 "method": "bdev_set_options", 00:16:46.805 "params": { 00:16:46.805 "bdev_io_pool_size": 65535, 00:16:46.805 "bdev_io_cache_size": 256, 00:16:46.805 "bdev_auto_examine": true, 00:16:46.805 "iobuf_small_cache_size": 128, 00:16:46.805 "iobuf_large_cache_size": 16 00:16:46.805 } 00:16:46.805 }, 00:16:46.805 { 00:16:46.805 "method": "bdev_raid_set_options", 00:16:46.805 "params": { 00:16:46.805 "process_window_size_kb": 1024, 00:16:46.805 "process_max_bandwidth_mb_sec": 0 00:16:46.805 } 00:16:46.805 }, 00:16:46.805 { 00:16:46.805 "method": "bdev_iscsi_set_options", 00:16:46.805 "params": { 00:16:46.805 "timeout_sec": 30 00:16:46.805 } 00:16:46.805 }, 00:16:46.805 { 00:16:46.805 "method": "bdev_nvme_set_options", 00:16:46.805 "params": { 00:16:46.805 "action_on_timeout": "none", 00:16:46.805 "timeout_us": 0, 00:16:46.805 "timeout_admin_us": 0, 00:16:46.805 "keep_alive_timeout_ms": 10000, 00:16:46.805 "arbitration_burst": 0, 00:16:46.805 "low_priority_weight": 0, 00:16:46.805 "medium_priority_weight": 0, 00:16:46.805 "high_priority_weight": 0, 00:16:46.805 "nvme_adminq_poll_period_us": 10000, 00:16:46.805 "nvme_ioq_poll_period_us": 0, 00:16:46.805 "io_queue_requests": 0, 00:16:46.805 "delay_cmd_submit": true, 00:16:46.805 "transport_retry_count": 4, 00:16:46.805 "bdev_retry_count": 3, 00:16:46.805 "transport_ack_timeout": 0, 00:16:46.805 "ctrlr_loss_timeout_sec": 0, 00:16:46.805 "reconnect_delay_sec": 0, 00:16:46.805 "fast_io_fail_timeout_sec": 0, 00:16:46.805 "disable_auto_failback": false, 00:16:46.805 "generate_uuids": false, 00:16:46.805 "transport_tos": 0, 00:16:46.805 "nvme_error_stat": false, 00:16:46.805 "rdma_srq_size": 0, 00:16:46.805 "io_path_stat": false, 00:16:46.805 "allow_accel_sequence": false, 00:16:46.805 "rdma_max_cq_size": 0, 00:16:46.805 "rdma_cm_event_timeout_ms": 0, 00:16:46.805 "dhchap_digests": [ 00:16:46.805 "sha256", 00:16:46.805 "sha384", 00:16:46.805 "sha512" 00:16:46.805 ], 00:16:46.805 "dhchap_dhgroups": [ 00:16:46.805 "null", 00:16:46.805 "ffdhe2048", 00:16:46.805 "ffdhe3072", 00:16:46.805 "ffdhe4096", 00:16:46.805 "ffdhe6144", 00:16:46.805 "ffdhe8192" 00:16:46.805 ], 00:16:46.805 "rdma_umr_per_io": false 00:16:46.805 } 00:16:46.805 }, 00:16:46.805 { 00:16:46.805 "method": "bdev_nvme_set_hotplug", 00:16:46.805 "params": { 00:16:46.805 "period_us": 100000, 00:16:46.805 "enable": false 00:16:46.805 } 00:16:46.805 }, 00:16:46.805 { 00:16:46.805 "method": "bdev_malloc_create", 00:16:46.805 "params": { 00:16:46.805 "name": "malloc0", 00:16:46.805 "num_blocks": 8192, 00:16:46.805 "block_size": 4096, 00:16:46.805 "physical_block_size": 4096, 00:16:46.805 "uuid": "f795e15c-8792-491f-aeea-e886afca8849", 00:16:46.805 "optimal_io_boundary": 0, 00:16:46.805 "md_size": 0, 00:16:46.805 "dif_type": 0, 00:16:46.805 "dif_is_head_of_md": false, 00:16:46.805 "dif_pi_format": 0 00:16:46.805 } 00:16:46.805 }, 00:16:46.805 { 00:16:46.805 "method": "bdev_wait_for_examine" 00:16:46.805 } 00:16:46.805 ] 00:16:46.805 }, 00:16:46.805 { 00:16:46.805 "subsystem": "scsi", 00:16:46.805 "config": null 00:16:46.805 }, 00:16:46.805 { 00:16:46.805 "subsystem": "scheduler", 00:16:46.805 "config": [ 00:16:46.805 { 00:16:46.805 "method": "framework_set_scheduler", 00:16:46.805 
"params": { 00:16:46.805 "name": "static" 00:16:46.805 } 00:16:46.805 } 00:16:46.805 ] 00:16:46.805 }, 00:16:46.805 { 00:16:46.805 "subsystem": "vhost_scsi", 00:16:46.805 "config": [] 00:16:46.805 }, 00:16:46.805 { 00:16:46.805 "subsystem": "vhost_blk", 00:16:46.805 "config": [] 00:16:46.805 }, 00:16:46.805 { 00:16:46.805 "subsystem": "ublk", 00:16:46.805 "config": [ 00:16:46.805 { 00:16:46.805 "method": "ublk_create_target", 00:16:46.805 "params": { 00:16:46.805 "cpumask": "1" 00:16:46.805 } 00:16:46.805 }, 00:16:46.805 { 00:16:46.805 "method": "ublk_start_disk", 00:16:46.805 "params": { 00:16:46.805 "bdev_name": "malloc0", 00:16:46.805 "ublk_id": 0, 00:16:46.805 "num_queues": 1, 00:16:46.805 "queue_depth": 128 00:16:46.805 } 00:16:46.805 } 00:16:46.805 ] 00:16:46.805 }, 00:16:46.805 { 00:16:46.805 "subsystem": "nbd", 00:16:46.805 "config": [] 00:16:46.805 }, 00:16:46.805 { 00:16:46.805 "subsystem": "nvmf", 00:16:46.805 "config": [ 00:16:46.805 { 00:16:46.805 "method": "nvmf_set_config", 00:16:46.805 "params": { 00:16:46.805 "discovery_filter": "match_any", 00:16:46.805 "admin_cmd_passthru": { 00:16:46.805 "identify_ctrlr": false 00:16:46.805 }, 00:16:46.805 "dhchap_digests": [ 00:16:46.805 "sha256", 00:16:46.805 "sha384", 00:16:46.805 "sha512" 00:16:46.805 ], 00:16:46.805 "dhchap_dhgroups": [ 00:16:46.805 "null", 00:16:46.805 "ffdhe2048", 00:16:46.805 "ffdhe3072", 00:16:46.805 "ffdhe4096", 00:16:46.805 "ffdhe6144", 00:16:46.805 "ffdhe8192" 00:16:46.805 ] 00:16:46.805 } 00:16:46.805 }, 00:16:46.805 { 00:16:46.806 "method": "nvmf_set_max_subsystems", 00:16:46.806 "params": { 00:16:46.806 "max_subsystems": 1024 00:16:46.806 } 00:16:46.806 }, 00:16:46.806 { 00:16:46.806 "method": "nvmf_set_crdt", 00:16:46.806 "params": { 00:16:46.806 "crdt1": 0, 00:16:46.806 "crdt2": 0, 00:16:46.806 "crdt3": 0 00:16:46.806 } 00:16:46.806 } 00:16:46.806 ] 00:16:46.806 }, 00:16:46.806 { 00:16:46.806 "subsystem": "iscsi", 00:16:46.806 "config": [ 00:16:46.806 { 00:16:46.806 "method": "iscsi_set_options", 00:16:46.806 "params": { 00:16:46.806 "node_base": "iqn.2016-06.io.spdk", 00:16:46.806 "max_sessions": 128, 00:16:46.806 "max_connections_per_session": 2, 00:16:46.806 "max_queue_depth": 64, 00:16:46.806 "default_time2wait": 2, 00:16:46.806 "default_time2retain": 20, 00:16:46.806 "first_burst_length": 8192, 00:16:46.806 "immediate_data": true, 00:16:46.806 "allow_duplicated_isid": false, 00:16:46.806 "error_recovery_level": 0, 00:16:46.806 "nop_timeout": 60, 00:16:46.806 "nop_in_interval": 30, 00:16:46.806 "disable_chap": false, 00:16:46.806 "require_chap": false, 00:16:46.806 "mutual_chap": false, 00:16:46.806 "chap_group": 0, 00:16:46.806 "max_large_datain_per_connection": 64, 00:16:46.806 "max_r2t_per_connection": 4, 00:16:46.806 "pdu_pool_size": 36864, 00:16:46.806 "immediate_data_pool_size": 16384, 00:16:46.806 "data_out_pool_size": 2048 00:16:46.806 } 00:16:46.806 } 00:16:46.806 ] 00:16:46.806 } 00:16:46.806 ] 00:16:46.806 }' 00:16:47.067 [2024-12-16 19:18:31.228132] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:16:47.067 [2024-12-16 19:18:31.228268] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75412 ] 00:16:47.067 [2024-12-16 19:18:31.383065] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:47.328 [2024-12-16 19:18:31.458668] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:47.900 [2024-12-16 19:18:32.100190] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:47.900 [2024-12-16 19:18:32.100820] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:47.900 [2024-12-16 19:18:32.107279] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:16:47.900 [2024-12-16 19:18:32.107336] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:16:47.900 [2024-12-16 19:18:32.107343] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:47.900 [2024-12-16 19:18:32.107349] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:47.900 [2024-12-16 19:18:32.116239] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:47.900 [2024-12-16 19:18:32.116254] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:47.900 [2024-12-16 19:18:32.123192] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:47.900 [2024-12-16 19:18:32.123262] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:47.900 [2024-12-16 19:18:32.140192] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:47.900 19:18:32 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:47.900 19:18:32 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:16:47.900 19:18:32 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:16:47.900 19:18:32 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:16:47.900 19:18:32 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.900 19:18:32 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:47.900 19:18:32 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.900 19:18:32 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:16:47.900 19:18:32 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:16:47.900 19:18:32 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 75412 00:16:47.900 19:18:32 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 75412 ']' 00:16:47.900 19:18:32 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 75412 00:16:47.900 19:18:32 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:16:47.900 19:18:32 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:47.900 19:18:32 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75412 00:16:47.900 19:18:32 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:47.900 19:18:32 ublk.test_save_ublk_config -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:47.900 killing process with pid 75412 00:16:47.900 19:18:32 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75412' 00:16:47.900 19:18:32 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 75412 00:16:47.900 19:18:32 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 75412 00:16:49.286 [2024-12-16 19:18:33.225542] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:49.286 [2024-12-16 19:18:33.264204] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:49.286 [2024-12-16 19:18:33.264298] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:49.286 [2024-12-16 19:18:33.271805] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:49.286 [2024-12-16 19:18:33.271842] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:16:49.286 [2024-12-16 19:18:33.271848] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:49.286 [2024-12-16 19:18:33.271868] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:49.286 [2024-12-16 19:18:33.271976] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:50.228 19:18:34 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:16:50.228 00:16:50.228 real 0m7.409s 00:16:50.228 user 0m4.943s 00:16:50.228 sys 0m3.111s 00:16:50.228 19:18:34 ublk.test_save_ublk_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:50.228 19:18:34 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:50.228 ************************************ 00:16:50.228 END TEST test_save_ublk_config 00:16:50.228 ************************************ 00:16:50.228 19:18:34 ublk -- ublk/ublk.sh@139 -- # spdk_pid=75481 00:16:50.228 19:18:34 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:50.228 19:18:34 ublk -- ublk/ublk.sh@141 -- # waitforlisten 75481 00:16:50.228 19:18:34 ublk -- common/autotest_common.sh@835 -- # '[' -z 75481 ']' 00:16:50.228 19:18:34 ublk -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:50.228 19:18:34 ublk -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:50.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:50.228 19:18:34 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:16:50.228 19:18:34 ublk -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:50.228 19:18:34 ublk -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:50.228 19:18:34 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:50.228 [2024-12-16 19:18:34.550672] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
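Note the core mask on this third target: -m takes a hexadecimal cpumask, so -m 0x3 (binary 11) selects cores 0 and 1, which is why the EAL reports two cores available and two reactors start below. In sketch form:

  # -m 0x3 = 0b11 -> run reactors on cores 0 and 1 (two reactors below);
  # -L ublk enables the debug log flag that produces the *DEBUG* ublk lines.
  build/bin/spdk_tgt -m 0x3 -L ublk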
00:16:50.228 [2024-12-16 19:18:34.550759] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75481 ] 00:16:50.489 [2024-12-16 19:18:34.697442] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:50.489 [2024-12-16 19:18:34.775222] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:16:50.489 [2024-12-16 19:18:34.775233] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:51.061 19:18:35 ublk -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:51.061 19:18:35 ublk -- common/autotest_common.sh@868 -- # return 0 00:16:51.061 19:18:35 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:16:51.061 19:18:35 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:51.061 19:18:35 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:51.061 19:18:35 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:51.061 ************************************ 00:16:51.061 START TEST test_create_ublk 00:16:51.061 ************************************ 00:16:51.061 19:18:35 ublk.test_create_ublk -- common/autotest_common.sh@1129 -- # test_create_ublk 00:16:51.061 19:18:35 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:16:51.061 19:18:35 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.061 19:18:35 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:51.061 [2024-12-16 19:18:35.412189] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:51.061 [2024-12-16 19:18:35.413728] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:51.322 19:18:35 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.322 19:18:35 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:16:51.322 19:18:35 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:16:51.322 19:18:35 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.322 19:18:35 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:51.322 19:18:35 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.322 19:18:35 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:16:51.322 19:18:35 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:16:51.322 19:18:35 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.322 19:18:35 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:51.322 [2024-12-16 19:18:35.575299] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:16:51.322 [2024-12-16 19:18:35.575600] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:16:51.322 [2024-12-16 19:18:35.575615] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:51.322 [2024-12-16 19:18:35.575620] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:51.322 [2024-12-16 19:18:35.583205] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:51.322 [2024-12-16 19:18:35.583223] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:51.322 
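The ctrl commands traced here (ADD_DEV, then SET_PARAMS, then START_DEV) are the fixed bring-up sequence SPDK drives against the kernel ublk driver; once START_DEV completes, /dev/ublkb0 is an ordinary block device. A hypothetical sanity check, not part of this test, that could confirm that from another shell:

  # hypothetical check after UBLK_CMD_START_DEV completes (not run by ublk.sh):
  lsblk /dev/ublkb0              # should list a 128 MiB disk backed by Malloc0
  blockdev --getbsz /dev/ublkb0  # block size pushed via UBLK_CMD_SET_PARAMS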
[2024-12-16 19:18:35.591192] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:51.322 [2024-12-16 19:18:35.591669] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:51.322 [2024-12-16 19:18:35.613199] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:51.322 19:18:35 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.322 19:18:35 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:16:51.322 19:18:35 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:16:51.322 19:18:35 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:16:51.322 19:18:35 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.322 19:18:35 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:51.322 19:18:35 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.322 19:18:35 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:16:51.322 { 00:16:51.322 "ublk_device": "/dev/ublkb0", 00:16:51.322 "id": 0, 00:16:51.322 "queue_depth": 512, 00:16:51.322 "num_queues": 4, 00:16:51.322 "bdev_name": "Malloc0" 00:16:51.322 } 00:16:51.322 ]' 00:16:51.322 19:18:35 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:16:51.322 19:18:35 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:16:51.322 19:18:35 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:16:51.583 19:18:35 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:16:51.583 19:18:35 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:16:51.583 19:18:35 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:16:51.583 19:18:35 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:16:51.583 19:18:35 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:16:51.583 19:18:35 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:16:51.583 19:18:35 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:16:51.583 19:18:35 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:16:51.583 19:18:35 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:16:51.583 19:18:35 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:16:51.583 19:18:35 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:16:51.583 19:18:35 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:16:51.583 19:18:35 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:16:51.583 19:18:35 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:16:51.583 19:18:35 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:16:51.583 19:18:35 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:16:51.583 19:18:35 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:16:51.583 19:18:35 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
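run_fio_test assembles the single long command shown above; reflowed here for readability, with identical flags, it writes the whole 128 MiB device (--size=134217728) with the 0xcc pattern for 10 seconds. Because --time_based --runtime=10 lets the write phase consume the entire run, fio warns immediately below that the separate verification read phase will never start:

  fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 \
      --rw=write --direct=1 --time_based --runtime=10 \
      --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0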
00:16:51.583 19:18:35 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0
00:16:51.583 fio: verification read phase will never start because write phase uses all of runtime
00:16:51.583 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1
00:16:51.583 fio-3.35
00:16:51.583 Starting 1 process
00:17:03.812
00:17:03.812 fio_test: (groupid=0, jobs=1): err= 0: pid=75526: Mon Dec 16 19:18:46 2024
00:17:03.812 write: IOPS=16.4k, BW=64.0MiB/s (67.1MB/s)(640MiB/10001msec); 0 zone resets
00:17:03.812 clat (usec): min=35, max=8013, avg=60.21, stdev=123.90
00:17:03.812 lat (usec): min=35, max=8013, avg=60.67, stdev=123.91
00:17:03.812 clat percentiles (usec):
00:17:03.812 | 1.00th=[ 40], 5.00th=[ 43], 10.00th=[ 44], 20.00th=[ 46],
00:17:03.812 | 30.00th=[ 48], 40.00th=[ 50], 50.00th=[ 52], 60.00th=[ 58],
00:17:03.812 | 70.00th=[ 61], 80.00th=[ 64], 90.00th=[ 69], 95.00th=[ 74],
00:17:03.812 | 99.00th=[ 88], 99.50th=[ 104], 99.90th=[ 2802], 99.95th=[ 3359],
00:17:03.812 | 99.99th=[ 3982]
00:17:03.812 bw ( KiB/s): min=34192, max=79464, per=100.00%, avg=65905.16, stdev=11956.45, samples=19
00:17:03.812 iops : min= 8548, max=19866, avg=16476.26, stdev=2989.12, samples=19
00:17:03.812 lat (usec) : 50=43.78%, 100=55.68%, 250=0.31%, 500=0.02%, 750=0.01%
00:17:03.812 lat (usec) : 1000=0.01%
00:17:03.812 lat (msec) : 2=0.05%, 4=0.13%, 10=0.01%
00:17:03.812 cpu : usr=2.55%, sys=13.33%, ctx=163905, majf=0, minf=795
00:17:03.812 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:17:03.812 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:17:03.812 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:17:03.812 issued rwts: total=0,163905,0,0 short=0,0,0,0 dropped=0,0,0,0
00:17:03.812 latency : target=0, window=0, percentile=100.00%, depth=1
00:17:03.812
00:17:03.812 Run status group 0 (all jobs):
00:17:03.812 WRITE: bw=64.0MiB/s (67.1MB/s), 64.0MiB/s-64.0MiB/s (67.1MB/s-67.1MB/s), io=640MiB (671MB), run=10001-10001msec
00:17:03.812
00:17:03.812 Disk stats (read/write):
00:17:03.812 ublkb0: ios=0/162352, merge=0/0, ticks=0/8320, in_queue=8320, util=98.68%
19:18:46 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0
19:18:46 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
19:18:46 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x
00:17:03.812 [2024-12-16 19:18:46.032190] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV
00:17:03.812 [2024-12-16 19:18:46.067712] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed
00:17:03.812 [2024-12-16 19:18:46.068650] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV
00:17:03.812 [2024-12-16 19:18:46.077212] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed
00:17:03.812 [2024-12-16 19:18:46.077458] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq
00:17:03.812 [2024-12-16 19:18:46.077473] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped
19:18:46 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
19:18:46 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd
ublk_stop_disk 0 00:17:03.812 19:18:46 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # local es=0 00:17:03.812 19:18:46 ublk.test_create_ublk -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:17:03.812 19:18:46 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:03.812 19:18:46 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:03.812 19:18:46 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:03.812 19:18:46 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:03.812 19:18:46 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # rpc_cmd ublk_stop_disk 0 00:17:03.812 19:18:46 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.812 19:18:46 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:03.812 [2024-12-16 19:18:46.094261] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:17:03.812 request: 00:17:03.812 { 00:17:03.812 "ublk_id": 0, 00:17:03.812 "method": "ublk_stop_disk", 00:17:03.812 "req_id": 1 00:17:03.812 } 00:17:03.812 Got JSON-RPC error response 00:17:03.812 response: 00:17:03.812 { 00:17:03.812 "code": -19, 00:17:03.812 "message": "No such device" 00:17:03.812 } 00:17:03.812 19:18:46 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:03.812 19:18:46 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # es=1 00:17:03.812 19:18:46 ublk.test_create_ublk -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:03.812 19:18:46 ublk.test_create_ublk -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:03.812 19:18:46 ublk.test_create_ublk -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:03.812 19:18:46 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:17:03.812 19:18:46 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.812 19:18:46 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:03.812 [2024-12-16 19:18:46.109252] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:17:03.812 [2024-12-16 19:18:46.117189] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:17:03.812 [2024-12-16 19:18:46.117219] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:17:03.812 19:18:46 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.812 19:18:46 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:17:03.812 19:18:46 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.812 19:18:46 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:03.812 19:18:46 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.812 19:18:46 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:17:03.812 19:18:46 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:17:03.812 19:18:46 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.812 19:18:46 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:03.812 19:18:46 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.812 19:18:46 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:17:03.812 19:18:46 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:17:03.812 19:18:46 
ublk.test_create_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:17:03.812 19:18:46 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:17:03.812 19:18:46 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.812 19:18:46 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:03.812 19:18:46 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.812 19:18:46 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:17:03.812 19:18:46 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:17:03.812 19:18:46 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:17:03.812 00:17:03.812 real 0m11.150s 00:17:03.812 user 0m0.554s 00:17:03.812 sys 0m1.403s 00:17:03.812 19:18:46 ublk.test_create_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:03.812 19:18:46 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:03.812 ************************************ 00:17:03.812 END TEST test_create_ublk 00:17:03.812 ************************************ 00:17:03.812 19:18:46 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:17:03.812 19:18:46 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:03.812 19:18:46 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:03.812 19:18:46 ublk -- common/autotest_common.sh@10 -- # set +x 00:17:03.812 ************************************ 00:17:03.812 START TEST test_create_multi_ublk 00:17:03.812 ************************************ 00:17:03.812 19:18:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@1129 -- # test_create_multi_ublk 00:17:03.812 19:18:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:17:03.812 19:18:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.812 19:18:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:03.812 [2024-12-16 19:18:46.608182] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:17:03.812 [2024-12-16 19:18:46.609715] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:17:03.812 19:18:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.812 19:18:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:17:03.812 19:18:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:17:03.812 19:18:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:03.812 19:18:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:17:03.812 19:18:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.812 19:18:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:03.812 19:18:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.812 19:18:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:17:03.812 19:18:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:17:03.812 19:18:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.812 19:18:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:03.812 [2024-12-16 19:18:46.812292] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 
num_queues 4 queue_depth 512 00:17:03.812 [2024-12-16 19:18:46.812587] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:17:03.813 [2024-12-16 19:18:46.812599] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:17:03.813 [2024-12-16 19:18:46.812607] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:17:03.813 [2024-12-16 19:18:46.824237] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:17:03.813 [2024-12-16 19:18:46.824251] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:17:03.813 [2024-12-16 19:18:46.836194] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:17:03.813 [2024-12-16 19:18:46.836671] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:17:03.813 [2024-12-16 19:18:46.862195] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:17:03.813 19:18:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.813 19:18:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:17:03.813 19:18:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:03.813 19:18:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:17:03.813 19:18:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.813 19:18:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:03.813 19:18:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.813 19:18:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:17:03.813 19:18:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:17:03.813 19:18:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.813 19:18:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:03.813 [2024-12-16 19:18:47.090290] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:17:03.813 [2024-12-16 19:18:47.090605] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:17:03.813 [2024-12-16 19:18:47.090619] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:17:03.813 [2024-12-16 19:18:47.090623] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:17:03.813 [2024-12-16 19:18:47.099352] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:17:03.813 [2024-12-16 19:18:47.099363] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:17:03.813 [2024-12-16 19:18:47.106201] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:17:03.813 [2024-12-16 19:18:47.106696] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:17:03.813 [2024-12-16 19:18:47.115210] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:17:03.813 19:18:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.813 19:18:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:17:03.813 19:18:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:03.813 
19:18:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:17:03.813 19:18:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.813 19:18:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:03.813 19:18:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.813 19:18:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:17:03.813 19:18:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:17:03.813 19:18:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.813 19:18:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:03.813 [2024-12-16 19:18:47.274273] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:17:03.813 [2024-12-16 19:18:47.274580] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:17:03.813 [2024-12-16 19:18:47.274594] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:17:03.813 [2024-12-16 19:18:47.274600] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:17:03.813 [2024-12-16 19:18:47.282208] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:17:03.813 [2024-12-16 19:18:47.282227] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:17:03.813 [2024-12-16 19:18:47.290191] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:17:03.813 [2024-12-16 19:18:47.290690] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:17:03.813 [2024-12-16 19:18:47.299234] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:17:03.813 19:18:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.813 19:18:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:17:03.813 19:18:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:03.813 19:18:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:17:03.813 19:18:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.813 19:18:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:03.813 19:18:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.813 19:18:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:17:03.813 19:18:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:17:03.813 19:18:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.813 19:18:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:03.813 [2024-12-16 19:18:47.458295] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:17:03.813 [2024-12-16 19:18:47.458600] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:17:03.813 [2024-12-16 19:18:47.458613] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:17:03.813 [2024-12-16 19:18:47.458618] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:17:03.813 
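test_create_multi_ublk runs the same create path four times; condensed, the xtrace above expands from a loop roughly like this (MAX_DEV_ID=3 was set at ublk.sh@29 near the top of this run, and the real loop body also asserts each step):

  for i in $(seq 0 $MAX_DEV_ID); do                      # MAX_DEV_ID=3 -> ublk0..ublk3
    rpc_cmd bdev_malloc_create -b "Malloc$i" 128 4096    # one 128 MiB malloc bdev per device
    rpc_cmd ublk_start_disk "Malloc$i" "$i" -q 4 -d 512  # -> /dev/ublkb$i
  done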
[2024-12-16 19:18:47.466201] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:17:03.813 [2024-12-16 19:18:47.466217] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:17:03.813 [2024-12-16 19:18:47.474207] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:17:03.813 [2024-12-16 19:18:47.474701] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:17:03.813 [2024-12-16 19:18:47.478077] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:17:03.813 19:18:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.813 19:18:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:17:03.813 19:18:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:17:03.813 19:18:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.813 19:18:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:03.813 19:18:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.813 19:18:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:17:03.813 { 00:17:03.813 "ublk_device": "/dev/ublkb0", 00:17:03.813 "id": 0, 00:17:03.813 "queue_depth": 512, 00:17:03.813 "num_queues": 4, 00:17:03.813 "bdev_name": "Malloc0" 00:17:03.813 }, 00:17:03.813 { 00:17:03.813 "ublk_device": "/dev/ublkb1", 00:17:03.813 "id": 1, 00:17:03.813 "queue_depth": 512, 00:17:03.813 "num_queues": 4, 00:17:03.813 "bdev_name": "Malloc1" 00:17:03.813 }, 00:17:03.813 { 00:17:03.813 "ublk_device": "/dev/ublkb2", 00:17:03.813 "id": 2, 00:17:03.813 "queue_depth": 512, 00:17:03.813 "num_queues": 4, 00:17:03.813 "bdev_name": "Malloc2" 00:17:03.813 }, 00:17:03.813 { 00:17:03.813 "ublk_device": "/dev/ublkb3", 00:17:03.813 "id": 3, 00:17:03.813 "queue_depth": 512, 00:17:03.813 "num_queues": 4, 00:17:03.813 "bdev_name": "Malloc3" 00:17:03.813 } 00:17:03.813 ]' 00:17:03.813 19:18:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:17:03.813 19:18:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:03.813 19:18:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:17:03.813 19:18:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:17:03.813 19:18:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:17:03.813 19:18:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:17:03.813 19:18:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:17:03.813 19:18:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:17:03.813 19:18:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:17:03.813 19:18:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:17:03.813 19:18:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:17:03.813 19:18:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:17:03.813 19:18:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:03.813 19:18:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:17:03.813 19:18:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = 
\/\d\e\v\/\u\b\l\k\b\1 ]] 00:17:03.813 19:18:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:17:03.813 19:18:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:17:03.813 19:18:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:17:03.813 19:18:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:17:03.813 19:18:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:17:03.813 19:18:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:17:03.813 19:18:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:17:03.813 19:18:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:17:03.813 19:18:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:03.813 19:18:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:17:03.813 19:18:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:17:03.813 19:18:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:17:03.813 19:18:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:17:03.813 19:18:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:17:03.813 19:18:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:17:03.813 19:18:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:17:03.813 19:18:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:17:03.813 19:18:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:17:03.813 19:18:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:17:03.813 19:18:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:03.813 19:18:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:17:03.813 19:18:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:17:03.813 19:18:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:17:03.814 19:18:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:17:03.814 19:18:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:17:03.814 19:18:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:17:03.814 19:18:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:17:03.814 19:18:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:17:03.814 19:18:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:17:03.814 19:18:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:17:03.814 19:18:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:17:03.814 19:18:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:17:03.814 19:18:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:03.814 19:18:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:17:03.814 19:18:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.814 19:18:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:03.814 [2024-12-16 19:18:48.118258] ublk.c: 
469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:17:03.814 [2024-12-16 19:18:48.153238] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:03.814 [2024-12-16 19:18:48.154138] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:17:03.814 [2024-12-16 19:18:48.161211] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:03.814 [2024-12-16 19:18:48.161460] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:17:03.814 [2024-12-16 19:18:48.161474] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:17:04.072 19:18:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.072 19:18:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:04.072 19:18:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:17:04.072 19:18:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.072 19:18:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:04.072 [2024-12-16 19:18:48.177267] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:17:04.072 [2024-12-16 19:18:48.211231] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:04.072 [2024-12-16 19:18:48.212051] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:17:04.072 [2024-12-16 19:18:48.220224] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:04.072 [2024-12-16 19:18:48.220458] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:17:04.072 [2024-12-16 19:18:48.220470] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:17:04.072 19:18:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.072 19:18:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:04.072 19:18:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:17:04.072 19:18:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.072 19:18:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:04.072 [2024-12-16 19:18:48.235268] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:17:04.072 [2024-12-16 19:18:48.267231] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:04.072 [2024-12-16 19:18:48.268005] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:17:04.072 [2024-12-16 19:18:48.275206] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:04.072 [2024-12-16 19:18:48.275445] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:17:04.072 [2024-12-16 19:18:48.275458] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:17:04.072 19:18:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.072 19:18:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:04.072 19:18:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:17:04.072 19:18:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.072 19:18:48 ublk.test_create_multi_ublk -- 
common/autotest_common.sh@10 -- # set +x 00:17:04.072 [2024-12-16 19:18:48.291256] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:17:04.072 [2024-12-16 19:18:48.328728] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:04.072 [2024-12-16 19:18:48.329648] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:17:04.072 [2024-12-16 19:18:48.334194] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:04.072 [2024-12-16 19:18:48.334434] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:17:04.072 [2024-12-16 19:18:48.334446] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:17:04.072 19:18:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.072 19:18:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:17:04.331 [2024-12-16 19:18:48.534242] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:17:04.331 [2024-12-16 19:18:48.542187] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:17:04.331 [2024-12-16 19:18:48.542213] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:17:04.331 19:18:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:17:04.331 19:18:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:04.331 19:18:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:17:04.331 19:18:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.331 19:18:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:04.589 19:18:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.589 19:18:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:04.589 19:18:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:17:04.589 19:18:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.589 19:18:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:05.155 19:18:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.155 19:18:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:05.155 19:18:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:17:05.155 19:18:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.155 19:18:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:05.155 19:18:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.155 19:18:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:05.155 19:18:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:17:05.155 19:18:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.155 19:18:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:05.413 19:18:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.413 19:18:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:17:05.413 19:18:49 
ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:17:05.413 19:18:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.413 19:18:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:05.413 19:18:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.413 19:18:49 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:17:05.413 19:18:49 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:17:05.413 19:18:49 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:17:05.413 19:18:49 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:17:05.413 19:18:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.413 19:18:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:05.413 19:18:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.413 19:18:49 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:17:05.413 19:18:49 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:17:05.413 19:18:49 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:17:05.413 00:17:05.413 real 0m3.135s 00:17:05.413 user 0m0.801s 00:17:05.413 sys 0m0.132s 00:17:05.413 ************************************ 00:17:05.413 END TEST test_create_multi_ublk 00:17:05.413 ************************************ 00:17:05.413 19:18:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:05.413 19:18:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:05.413 19:18:49 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:17:05.413 19:18:49 ublk -- ublk/ublk.sh@147 -- # cleanup 00:17:05.413 19:18:49 ublk -- ublk/ublk.sh@130 -- # killprocess 75481 00:17:05.413 19:18:49 ublk -- common/autotest_common.sh@954 -- # '[' -z 75481 ']' 00:17:05.413 19:18:49 ublk -- common/autotest_common.sh@958 -- # kill -0 75481 00:17:05.413 19:18:49 ublk -- common/autotest_common.sh@959 -- # uname 00:17:05.413 19:18:49 ublk -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:05.413 19:18:49 ublk -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75481 00:17:05.671 19:18:49 ublk -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:05.671 19:18:49 ublk -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:05.671 killing process with pid 75481 00:17:05.671 19:18:49 ublk -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75481' 00:17:05.671 19:18:49 ublk -- common/autotest_common.sh@973 -- # kill 75481 00:17:05.671 19:18:49 ublk -- common/autotest_common.sh@978 -- # wait 75481 00:17:06.237 [2024-12-16 19:18:50.308610] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:17:06.237 [2024-12-16 19:18:50.308662] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:17:06.805 00:17:06.805 real 0m24.118s 00:17:06.805 user 0m34.657s 00:17:06.805 sys 0m9.200s 00:17:06.805 19:18:50 ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:06.805 ************************************ 00:17:06.805 19:18:50 ublk -- common/autotest_common.sh@10 -- # set +x 00:17:06.805 END TEST ublk 00:17:06.805 ************************************ 00:17:06.805 19:18:50 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:17:06.805 
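The test_create_multi_ublk pass that just completed drives the SPDK ublk target entirely through its RPC interface. A condensed, hand-written sketch of that sequence follows (not the test script itself; "rpc" is just shorthand for the scripts/rpc.py path seen in the trace, and the bdev size, queue count, and queue depth are the values the trace shows):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # ublk_create_target was issued earlier in the suite
  for i in 0 1 2 3; do
    $rpc bdev_malloc_create -b Malloc$i 128 4096   # 128 MiB bdev, 4 KiB blocks
    $rpc ublk_start_disk Malloc$i $i -q 4 -d 512   # exposes /dev/ublkb$i
  done
  $rpc ublk_get_disks | jq -r '.[0].ublk_device'   # expect /dev/ublkb0
  for i in 0 1 2 3; do $rpc ublk_stop_disk $i; done
  $rpc -t 120 ublk_destroy_target
  for i in 0 1 2 3; do $rpc bdev_malloc_delete Malloc$i; done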
19:18:50 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:06.805 19:18:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:06.805 19:18:50 -- common/autotest_common.sh@10 -- # set +x 00:17:06.805 ************************************ 00:17:06.805 START TEST ublk_recovery 00:17:06.805 ************************************ 00:17:06.805 19:18:51 ublk_recovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:17:06.805 * Looking for test storage... 00:17:06.805 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:17:06.805 19:18:51 ublk_recovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:06.805 19:18:51 ublk_recovery -- common/autotest_common.sh@1711 -- # lcov --version 00:17:06.805 19:18:51 ublk_recovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:06.805 19:18:51 ublk_recovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:06.805 19:18:51 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:06.805 19:18:51 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:06.805 19:18:51 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:06.805 19:18:51 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:17:06.805 19:18:51 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:17:06.805 19:18:51 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:17:06.805 19:18:51 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:17:06.805 19:18:51 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:17:06.805 19:18:51 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:17:06.805 19:18:51 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:17:06.805 19:18:51 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:06.805 19:18:51 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:17:06.805 19:18:51 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:17:06.805 19:18:51 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:06.805 19:18:51 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:06.805 19:18:51 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:17:06.805 19:18:51 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:17:06.805 19:18:51 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:06.805 19:18:51 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:17:06.805 19:18:51 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:17:06.805 19:18:51 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:17:06.805 19:18:51 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:17:06.805 19:18:51 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:06.805 19:18:51 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:17:06.805 19:18:51 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:17:06.805 19:18:51 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:06.805 19:18:51 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:06.805 19:18:51 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:17:06.805 19:18:51 ublk_recovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:06.805 19:18:51 ublk_recovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:06.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:06.805 --rc genhtml_branch_coverage=1 00:17:06.805 --rc genhtml_function_coverage=1 00:17:06.805 --rc genhtml_legend=1 00:17:06.805 --rc geninfo_all_blocks=1 00:17:06.805 --rc geninfo_unexecuted_blocks=1 00:17:06.805 00:17:06.805 ' 00:17:06.805 19:18:51 ublk_recovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:06.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:06.805 --rc genhtml_branch_coverage=1 00:17:06.805 --rc genhtml_function_coverage=1 00:17:06.805 --rc genhtml_legend=1 00:17:06.805 --rc geninfo_all_blocks=1 00:17:06.805 --rc geninfo_unexecuted_blocks=1 00:17:06.805 00:17:06.805 ' 00:17:06.805 19:18:51 ublk_recovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:06.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:06.805 --rc genhtml_branch_coverage=1 00:17:06.805 --rc genhtml_function_coverage=1 00:17:06.805 --rc genhtml_legend=1 00:17:06.805 --rc geninfo_all_blocks=1 00:17:06.805 --rc geninfo_unexecuted_blocks=1 00:17:06.805 00:17:06.805 ' 00:17:06.805 19:18:51 ublk_recovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:06.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:06.805 --rc genhtml_branch_coverage=1 00:17:06.805 --rc genhtml_function_coverage=1 00:17:06.805 --rc genhtml_legend=1 00:17:06.805 --rc geninfo_all_blocks=1 00:17:06.805 --rc geninfo_unexecuted_blocks=1 00:17:06.805 00:17:06.805 ' 00:17:06.805 19:18:51 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:17:06.805 19:18:51 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:17:06.805 19:18:51 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:17:06.805 19:18:51 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:17:06.805 19:18:51 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:17:06.805 19:18:51 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:17:06.805 19:18:51 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:17:06.806 19:18:51 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:17:06.806 19:18:51 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:17:06.806 19:18:51 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:17:06.806 19:18:51 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=75871 00:17:06.806 19:18:51 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:06.806 19:18:51 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 75871 00:17:06.806 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:06.806 19:18:51 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 75871 ']' 00:17:06.806 19:18:51 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:17:06.806 19:18:51 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:06.806 19:18:51 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:06.806 19:18:51 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:06.806 19:18:51 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:06.806 19:18:51 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:07.065 [2024-12-16 19:18:51.220755] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:17:07.065 [2024-12-16 19:18:51.220876] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75871 ] 00:17:07.065 [2024-12-16 19:18:51.367497] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:07.324 [2024-12-16 19:18:51.442680] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:17:07.324 [2024-12-16 19:18:51.442780] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:17:07.890 19:18:52 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:07.890 19:18:52 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:17:07.890 19:18:52 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:17:07.890 19:18:52 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.890 19:18:52 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:07.890 [2024-12-16 19:18:52.018189] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:17:07.890 [2024-12-16 19:18:52.019703] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:17:07.890 19:18:52 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.890 19:18:52 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:17:07.890 19:18:52 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.890 19:18:52 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:07.890 malloc0 00:17:07.890 19:18:52 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.890 19:18:52 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:17:07.890 19:18:52 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.890 19:18:52 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:07.890 [2024-12-16 19:18:52.098302] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 
2 queue_depth 128 00:17:07.890 [2024-12-16 19:18:52.098388] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:17:07.890 [2024-12-16 19:18:52.098396] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:17:07.890 [2024-12-16 19:18:52.098402] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:17:07.890 [2024-12-16 19:18:52.107292] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:17:07.890 [2024-12-16 19:18:52.107309] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:17:07.890 [2024-12-16 19:18:52.114196] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:17:07.890 [2024-12-16 19:18:52.114304] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:17:07.890 [2024-12-16 19:18:52.129211] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:17:07.890 1 00:17:07.890 19:18:52 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.890 19:18:52 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:17:08.824 19:18:53 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=75906 00:17:08.824 19:18:53 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:17:08.824 19:18:53 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:17:09.082 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:09.082 fio-3.35 00:17:09.082 Starting 1 process 00:17:14.352 19:18:58 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 75871 00:17:14.352 19:18:58 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:17:19.641 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 75871 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:17:19.641 19:19:03 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=76016 00:17:19.641 19:19:03 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:19.642 19:19:03 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:17:19.642 19:19:03 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 76016 00:17:19.642 19:19:03 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 76016 ']' 00:17:19.642 19:19:03 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:19.642 19:19:03 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:19.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:19.642 19:19:03 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:19.642 19:19:03 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:19.642 19:19:03 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:19.642 [2024-12-16 19:19:03.236081] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
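The trace around this point is the crash-recovery core of the suite: fio keeps random read/write I/O in flight on /dev/ublkb1 while the ublk target process is hard-killed and restarted, after which the existing device is reattached rather than recreated. A sketch of that flow assembled from the trace (the PIDs are specific to this run; rpc again abbreviates scripts/rpc.py, and spdk_pid/SPDK_BIN_DIR are the variables the script itself uses):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 \
      --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 \
      --time_based --runtime=60 &             # keep I/O in flight
  kill -9 "$spdk_pid"                         # hard-kill the target under load
  "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk &   # restart it
  $rpc ublk_create_target
  $rpc bdev_malloc_create -b malloc0 64 4096
  $rpc ublk_recover_disk malloc0 1   # GET_DEV_INFO, then START_/END_USER_RECOVERY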
00:17:19.642 [2024-12-16 19:19:03.236235] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76016 ] 00:17:19.642 [2024-12-16 19:19:03.401427] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:19.642 [2024-12-16 19:19:03.498447] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:17:19.642 [2024-12-16 19:19:03.498450] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:17:19.903 19:19:04 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:19.903 19:19:04 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:17:19.903 19:19:04 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:17:19.903 19:19:04 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.903 19:19:04 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:19.903 [2024-12-16 19:19:04.183204] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:17:19.903 [2024-12-16 19:19:04.185509] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:17:19.903 19:19:04 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.903 19:19:04 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:17:19.903 19:19:04 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.903 19:19:04 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:20.216 malloc0 00:17:20.216 19:19:04 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.216 19:19:04 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:17:20.216 19:19:04 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.216 19:19:04 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:20.216 [2024-12-16 19:19:04.311387] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:17:20.216 [2024-12-16 19:19:04.311436] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:17:20.216 [2024-12-16 19:19:04.311448] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:17:20.216 [2024-12-16 19:19:04.319266] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:17:20.216 [2024-12-16 19:19:04.319295] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 2 00:17:20.216 [2024-12-16 19:19:04.319304] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:17:20.216 [2024-12-16 19:19:04.319408] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:17:20.216 1 00:17:20.216 19:19:04 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.216 19:19:04 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 75906 00:17:20.216 [2024-12-16 19:19:04.327207] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:17:20.216 [2024-12-16 19:19:04.334213] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:17:20.216 [2024-12-16 19:19:04.343529] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:17:20.216 [2024-12-16 
19:19:04.343564] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:18:16.479 00:18:16.479 fio_test: (groupid=0, jobs=1): err= 0: pid=75909: Mon Dec 16 19:19:53 2024 00:18:16.479 read: IOPS=26.5k, BW=103MiB/s (108MB/s)(6206MiB/60001msec) 00:18:16.479 slat (nsec): min=868, max=808222, avg=4915.75, stdev=1695.46 00:18:16.479 clat (usec): min=937, max=6209.7k, avg=2382.18, stdev=40005.05 00:18:16.479 lat (usec): min=941, max=6209.7k, avg=2387.10, stdev=40005.05 00:18:16.479 clat percentiles (usec): 00:18:16.479 | 1.00th=[ 1762], 5.00th=[ 1876], 10.00th=[ 1909], 20.00th=[ 1926], 00:18:16.479 | 30.00th=[ 1958], 40.00th=[ 1958], 50.00th=[ 1975], 60.00th=[ 1991], 00:18:16.479 | 70.00th=[ 2008], 80.00th=[ 2040], 90.00th=[ 2442], 95.00th=[ 2900], 00:18:16.479 | 99.00th=[ 4948], 99.50th=[ 5866], 99.90th=[ 7373], 99.95th=[ 8979], 00:18:16.479 | 99.99th=[13566] 00:18:16.479 bw ( KiB/s): min= 528, max=124808, per=100.00%, avg=116635.41, stdev=17817.94, samples=108 00:18:16.479 iops : min= 132, max=31202, avg=29158.85, stdev=4454.49, samples=108 00:18:16.479 write: IOPS=26.5k, BW=103MiB/s (108MB/s)(6200MiB/60001msec); 0 zone resets 00:18:16.479 slat (nsec): min=982, max=450934, avg=4946.24, stdev=1598.51 00:18:16.479 clat (usec): min=722, max=6210.1k, avg=2443.33, stdev=38795.22 00:18:16.479 lat (usec): min=728, max=6210.1k, avg=2448.28, stdev=38795.22 00:18:16.479 clat percentiles (usec): 00:18:16.479 | 1.00th=[ 1795], 5.00th=[ 1958], 10.00th=[ 1991], 20.00th=[ 2024], 00:18:16.479 | 30.00th=[ 2040], 40.00th=[ 2057], 50.00th=[ 2073], 60.00th=[ 2089], 00:18:16.479 | 70.00th=[ 2114], 80.00th=[ 2147], 90.00th=[ 2507], 95.00th=[ 2802], 00:18:16.479 | 99.00th=[ 4948], 99.50th=[ 5932], 99.90th=[ 7439], 99.95th=[ 8717], 00:18:16.479 | 99.99th=[13698] 00:18:16.479 bw ( KiB/s): min= 496, max=124328, per=100.00%, avg=116506.59, stdev=17875.99, samples=108 00:18:16.479 iops : min= 124, max=31082, avg=29126.65, stdev=4469.00, samples=108 00:18:16.479 lat (usec) : 750=0.01%, 1000=0.01% 00:18:16.479 lat (msec) : 2=37.47%, 4=60.01%, 10=2.48%, 20=0.04%, >=2000=0.01% 00:18:16.479 cpu : usr=5.67%, sys=26.70%, ctx=106824, majf=0, minf=13 00:18:16.479 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:18:16.479 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:16.479 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:16.479 issued rwts: total=1588826,1587166,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:16.479 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:16.479 00:18:16.479 Run status group 0 (all jobs): 00:18:16.479 READ: bw=103MiB/s (108MB/s), 103MiB/s-103MiB/s (108MB/s-108MB/s), io=6206MiB (6508MB), run=60001-60001msec 00:18:16.479 WRITE: bw=103MiB/s (108MB/s), 103MiB/s-103MiB/s (108MB/s-108MB/s), io=6200MiB (6501MB), run=60001-60001msec 00:18:16.479 00:18:16.479 Disk stats (read/write): 00:18:16.479 ublkb1: ios=1585481/1583896, merge=0/0, ticks=3695169/3659684, in_queue=7354854, util=99.89% 00:18:16.479 19:19:53 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:18:16.479 19:19:53 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.479 19:19:53 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:16.479 [2024-12-16 19:19:53.390720] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:18:16.479 [2024-12-16 19:19:53.433207] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 
00:18:16.479 [2024-12-16 19:19:53.433374] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:18:16.479 [2024-12-16 19:19:53.449200] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:18:16.479 [2024-12-16 19:19:53.449298] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:18:16.479 [2024-12-16 19:19:53.449307] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:18:16.479 19:19:53 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.479 19:19:53 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:18:16.479 19:19:53 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.479 19:19:53 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:16.479 [2024-12-16 19:19:53.465262] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:18:16.479 [2024-12-16 19:19:53.473189] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:18:16.479 [2024-12-16 19:19:53.473217] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:18:16.479 19:19:53 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.479 19:19:53 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:18:16.479 19:19:53 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:18:16.479 19:19:53 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 76016 00:18:16.479 19:19:53 ublk_recovery -- common/autotest_common.sh@954 -- # '[' -z 76016 ']' 00:18:16.479 19:19:53 ublk_recovery -- common/autotest_common.sh@958 -- # kill -0 76016 00:18:16.479 19:19:53 ublk_recovery -- common/autotest_common.sh@959 -- # uname 00:18:16.479 19:19:53 ublk_recovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:16.479 19:19:53 ublk_recovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76016 00:18:16.479 killing process with pid 76016 00:18:16.479 19:19:53 ublk_recovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:16.479 19:19:53 ublk_recovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:16.479 19:19:53 ublk_recovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76016' 00:18:16.479 19:19:53 ublk_recovery -- common/autotest_common.sh@973 -- # kill 76016 00:18:16.479 19:19:53 ublk_recovery -- common/autotest_common.sh@978 -- # wait 76016 00:18:16.479 [2024-12-16 19:19:54.602355] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:18:16.479 [2024-12-16 19:19:54.602406] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:18:16.479 ************************************ 00:18:16.479 END TEST ublk_recovery 00:18:16.479 ************************************ 00:18:16.479 00:18:16.479 real 1m4.652s 00:18:16.479 user 1m47.240s 00:18:16.479 sys 0m30.685s 00:18:16.479 19:19:55 ublk_recovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:16.479 19:19:55 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:16.479 19:19:55 -- spdk/autotest.sh@251 -- # [[ 0 -eq 1 ]] 00:18:16.479 19:19:55 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:18:16.479 19:19:55 -- spdk/autotest.sh@260 -- # timing_exit lib 00:18:16.479 19:19:55 -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:16.479 19:19:55 -- common/autotest_common.sh@10 -- # set +x 00:18:16.479 19:19:55 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:18:16.479 19:19:55 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:18:16.479 19:19:55 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 
00:18:16.479 19:19:55 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:18:16.479 19:19:55 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:18:16.479 19:19:55 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:18:16.479 19:19:55 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:18:16.479 19:19:55 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:18:16.479 19:19:55 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:18:16.479 19:19:55 -- spdk/autotest.sh@342 -- # '[' 1 -eq 1 ']' 00:18:16.479 19:19:55 -- spdk/autotest.sh@343 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:18:16.480 19:19:55 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:16.480 19:19:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:16.480 19:19:55 -- common/autotest_common.sh@10 -- # set +x 00:18:16.480 ************************************ 00:18:16.480 START TEST ftl 00:18:16.480 ************************************ 00:18:16.480 19:19:55 ftl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:18:16.480 * Looking for test storage... 00:18:16.480 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:18:16.480 19:19:55 ftl -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:16.480 19:19:55 ftl -- common/autotest_common.sh@1711 -- # lcov --version 00:18:16.480 19:19:55 ftl -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:16.480 19:19:55 ftl -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:16.480 19:19:55 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:16.480 19:19:55 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:16.480 19:19:55 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:16.480 19:19:55 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:18:16.480 19:19:55 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:18:16.480 19:19:55 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:18:16.480 19:19:55 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:18:16.480 19:19:55 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:18:16.480 19:19:55 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:18:16.480 19:19:55 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:18:16.480 19:19:55 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:16.480 19:19:55 ftl -- scripts/common.sh@344 -- # case "$op" in 00:18:16.480 19:19:55 ftl -- scripts/common.sh@345 -- # : 1 00:18:16.480 19:19:55 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:16.480 19:19:55 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:16.480 19:19:55 ftl -- scripts/common.sh@365 -- # decimal 1 00:18:16.480 19:19:55 ftl -- scripts/common.sh@353 -- # local d=1 00:18:16.480 19:19:55 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:16.480 19:19:55 ftl -- scripts/common.sh@355 -- # echo 1 00:18:16.480 19:19:55 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:18:16.480 19:19:55 ftl -- scripts/common.sh@366 -- # decimal 2 00:18:16.480 19:19:55 ftl -- scripts/common.sh@353 -- # local d=2 00:18:16.480 19:19:55 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:16.480 19:19:55 ftl -- scripts/common.sh@355 -- # echo 2 00:18:16.480 19:19:55 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:18:16.480 19:19:55 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:16.480 19:19:55 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:16.480 19:19:55 ftl -- scripts/common.sh@368 -- # return 0 00:18:16.480 19:19:55 ftl -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:16.480 19:19:55 ftl -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:16.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:16.480 --rc genhtml_branch_coverage=1 00:18:16.480 --rc genhtml_function_coverage=1 00:18:16.480 --rc genhtml_legend=1 00:18:16.480 --rc geninfo_all_blocks=1 00:18:16.480 --rc geninfo_unexecuted_blocks=1 00:18:16.480 00:18:16.480 ' 00:18:16.480 19:19:55 ftl -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:16.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:16.480 --rc genhtml_branch_coverage=1 00:18:16.480 --rc genhtml_function_coverage=1 00:18:16.480 --rc genhtml_legend=1 00:18:16.480 --rc geninfo_all_blocks=1 00:18:16.480 --rc geninfo_unexecuted_blocks=1 00:18:16.480 00:18:16.480 ' 00:18:16.480 19:19:55 ftl -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:16.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:16.480 --rc genhtml_branch_coverage=1 00:18:16.480 --rc genhtml_function_coverage=1 00:18:16.480 --rc genhtml_legend=1 00:18:16.480 --rc geninfo_all_blocks=1 00:18:16.480 --rc geninfo_unexecuted_blocks=1 00:18:16.480 00:18:16.480 ' 00:18:16.480 19:19:55 ftl -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:16.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:16.480 --rc genhtml_branch_coverage=1 00:18:16.480 --rc genhtml_function_coverage=1 00:18:16.480 --rc genhtml_legend=1 00:18:16.480 --rc geninfo_all_blocks=1 00:18:16.480 --rc geninfo_unexecuted_blocks=1 00:18:16.480 00:18:16.480 ' 00:18:16.480 19:19:55 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:18:16.480 19:19:55 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:18:16.480 19:19:55 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:18:16.480 19:19:55 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:18:16.480 19:19:55 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:18:16.480 19:19:55 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:18:16.480 19:19:55 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:16.480 19:19:55 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:18:16.480 19:19:55 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:18:16.480 19:19:55 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:16.480 19:19:55 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:16.480 19:19:55 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:18:16.480 19:19:55 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:18:16.480 19:19:55 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:16.480 19:19:55 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:16.480 19:19:55 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:18:16.480 19:19:55 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:18:16.480 19:19:55 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:16.480 19:19:55 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:16.480 19:19:55 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:18:16.480 19:19:55 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:18:16.480 19:19:55 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:16.480 19:19:55 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:16.480 19:19:55 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:16.480 19:19:55 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:16.480 19:19:55 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:18:16.480 19:19:55 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:18:16.480 19:19:55 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:16.480 19:19:55 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:16.480 19:19:55 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:16.480 19:19:55 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:18:16.480 19:19:55 ftl -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:18:16.480 19:19:55 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:18:16.480 19:19:55 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:18:16.480 19:19:55 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:18:16.480 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:16.480 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:16.480 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:16.480 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:16.480 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:16.480 19:19:56 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=76822 00:18:16.480 19:19:56 ftl -- ftl/ftl.sh@38 -- # waitforlisten 76822 00:18:16.480 19:19:56 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:18:16.480 19:19:56 ftl -- common/autotest_common.sh@835 -- # '[' -z 76822 ']' 00:18:16.480 19:19:56 ftl -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:16.480 19:19:56 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:16.480 19:19:56 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:16.480 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:16.480 19:19:56 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:16.480 19:19:56 ftl -- common/autotest_common.sh@10 -- # set +x 00:18:16.480 [2024-12-16 19:19:56.489616] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:18:16.480 [2024-12-16 19:19:56.489922] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76822 ] 00:18:16.480 [2024-12-16 19:19:56.643306] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:16.480 [2024-12-16 19:19:56.751035] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:18:16.480 19:19:57 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:16.480 19:19:57 ftl -- common/autotest_common.sh@868 -- # return 0 00:18:16.480 19:19:57 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:18:16.480 19:19:57 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:18:16.480 19:19:58 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:18:16.480 19:19:58 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:18:16.480 19:19:58 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:18:16.480 19:19:58 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:18:16.480 19:19:58 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:18:16.480 19:19:59 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:18:16.480 19:19:59 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:18:16.480 19:19:59 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:18:16.480 19:19:59 ftl -- ftl/ftl.sh@50 -- # break 00:18:16.480 19:19:59 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:18:16.480 19:19:59 ftl -- ftl/ftl.sh@59 -- # base_size=1310720 00:18:16.480 19:19:59 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:18:16.480 19:19:59 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:18:16.480 19:19:59 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:18:16.480 19:19:59 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:18:16.480 19:19:59 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:18:16.480 19:19:59 ftl -- ftl/ftl.sh@63 -- # break 00:18:16.481 19:19:59 ftl -- ftl/ftl.sh@66 -- # killprocess 76822 00:18:16.481 19:19:59 ftl -- common/autotest_common.sh@954 -- # '[' -z 76822 ']' 00:18:16.481 19:19:59 ftl -- common/autotest_common.sh@958 -- # kill -0 76822 00:18:16.481 19:19:59 ftl -- common/autotest_common.sh@959 -- # uname 00:18:16.481 19:19:59 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:16.481 19:19:59 ftl -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76822 00:18:16.481 killing process with pid 76822 00:18:16.481 19:19:59 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:16.481 19:19:59 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:16.481 19:19:59 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76822' 00:18:16.481 19:19:59 ftl -- common/autotest_common.sh@973 -- # kill 76822 00:18:16.481 19:19:59 ftl -- common/autotest_common.sh@978 -- # wait 76822 00:18:16.481 19:20:00 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:18:16.481 19:20:00 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:18:16.481 19:20:00 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:18:16.481 19:20:00 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:16.481 19:20:00 ftl -- common/autotest_common.sh@10 -- # set +x 00:18:16.481 ************************************ 00:18:16.481 START TEST ftl_fio_basic 00:18:16.481 ************************************ 00:18:16.481 19:20:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:18:16.481 * Looking for test storage... 00:18:16.481 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:18:16.481 19:20:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:16.481 19:20:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1711 -- # lcov --version 00:18:16.481 19:20:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:16.481 19:20:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:16.481 19:20:00 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:16.481 19:20:00 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:16.481 19:20:00 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:16.481 19:20:00 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:18:16.481 19:20:00 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:18:16.481 19:20:00 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:18:16.481 19:20:00 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:18:16.481 19:20:00 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:18:16.481 19:20:00 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:18:16.481 19:20:00 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:18:16.481 19:20:00 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:16.481 19:20:00 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:18:16.481 19:20:00 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:18:16.481 19:20:00 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:16.481 19:20:00 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:16.481 19:20:00 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:18:16.481 19:20:00 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:18:16.481 19:20:00 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:16.481 19:20:00 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:18:16.481 19:20:00 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:18:16.481 19:20:00 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:18:16.481 19:20:00 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:18:16.481 19:20:00 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:16.481 19:20:00 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:18:16.481 19:20:00 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:18:16.481 19:20:00 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:16.481 19:20:00 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:16.481 19:20:00 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:18:16.481 19:20:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:16.481 19:20:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:16.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:16.481 --rc genhtml_branch_coverage=1 00:18:16.481 --rc genhtml_function_coverage=1 00:18:16.481 --rc genhtml_legend=1 00:18:16.481 --rc geninfo_all_blocks=1 00:18:16.481 --rc geninfo_unexecuted_blocks=1 00:18:16.481 00:18:16.481 ' 00:18:16.481 19:20:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:16.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:16.481 --rc genhtml_branch_coverage=1 00:18:16.481 --rc genhtml_function_coverage=1 00:18:16.481 --rc genhtml_legend=1 00:18:16.481 --rc geninfo_all_blocks=1 00:18:16.481 --rc geninfo_unexecuted_blocks=1 00:18:16.481 00:18:16.481 ' 00:18:16.481 19:20:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:16.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:16.481 --rc genhtml_branch_coverage=1 00:18:16.481 --rc genhtml_function_coverage=1 00:18:16.481 --rc genhtml_legend=1 00:18:16.481 --rc geninfo_all_blocks=1 00:18:16.481 --rc geninfo_unexecuted_blocks=1 00:18:16.481 00:18:16.481 ' 00:18:16.481 19:20:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:16.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:16.481 --rc genhtml_branch_coverage=1 00:18:16.481 --rc genhtml_function_coverage=1 00:18:16.481 --rc genhtml_legend=1 00:18:16.481 --rc geninfo_all_blocks=1 00:18:16.481 --rc geninfo_unexecuted_blocks=1 00:18:16.481 00:18:16.481 ' 00:18:16.481 19:20:00 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:18:16.481 19:20:00 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:18:16.481 19:20:00 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:18:16.481 19:20:00 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:18:16.481 19:20:00 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:18:16.481 19:20:00 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:18:16.481 19:20:00 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:16.481 19:20:00 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:18:16.481 19:20:00 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:18:16.481 19:20:00 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:16.481 19:20:00 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:16.481 19:20:00 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:18:16.481 19:20:00 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:18:16.481 19:20:00 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:16.481 19:20:00 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:16.481 19:20:00 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:18:16.481 19:20:00 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:18:16.481 19:20:00 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:16.481 19:20:00 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:16.481 19:20:00 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:18:16.481 19:20:00 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:18:16.481 19:20:00 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:16.481 19:20:00 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:16.481 19:20:00 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:16.481 19:20:00 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:16.481 19:20:00 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:18:16.481 19:20:00 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:18:16.481 19:20:00 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:16.481 19:20:00 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:16.481 19:20:00 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:18:16.481 19:20:00 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:18:16.481 19:20:00 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:18:16.481 19:20:00 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:18:16.481 19:20:00 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:16.481 19:20:00 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:18:16.481 19:20:00 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:18:16.481 19:20:00 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 
randw-verify-depth128' 00:18:16.481 19:20:00 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:18:16.481 19:20:00 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:18:16.481 19:20:00 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:18:16.481 19:20:00 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:18:16.481 19:20:00 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:18:16.481 19:20:00 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:18:16.481 19:20:00 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:16.481 19:20:00 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:16.481 19:20:00 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:18:16.481 19:20:00 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=76954 00:18:16.481 19:20:00 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 76954 00:18:16.481 19:20:00 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:18:16.481 19:20:00 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # '[' -z 76954 ']' 00:18:16.481 19:20:00 ftl.ftl_fio_basic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:16.481 19:20:00 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:16.481 19:20:00 ftl.ftl_fio_basic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:16.481 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:16.481 19:20:00 ftl.ftl_fio_basic -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:16.481 19:20:00 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:16.481 [2024-12-16 19:20:00.818836] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
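
Two details of the target launch above are worth unpacking. First, `-m 7` is a core bitmask: 0x7 is binary 111, selecting cores 0 through 2, which is exactly why the EAL output below reports "Total cores available: 3" and starts reactors on cores 0, 1 and 2. Second, `waitforlisten` blocks until the target's RPC socket answers; a simplified stand-in for that launch-and-wait pattern (the real helper carries more retries and diagnostics):

    # Start the target in the background, then poll its RPC socket.
    spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$spdk_tgt" -m 7 &                  # mask 0x7 -> reactors on cores 0,1,2
    svcpid=$!
    for (( i = 0; i < 100; i++ )); do
        kill -0 "$svcpid" 2>/dev/null || { echo "spdk_tgt exited early" >&2; exit 1; }
        "$rpc" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.1
    done
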
00:18:16.482 [2024-12-16 19:20:00.819128] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76954 ] 00:18:16.741 [2024-12-16 19:20:00.980123] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:16.741 [2024-12-16 19:20:01.058616] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:18:16.741 [2024-12-16 19:20:01.058857] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:18:16.741 [2024-12-16 19:20:01.058884] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:18:17.306 19:20:01 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:17.306 19:20:01 ftl.ftl_fio_basic -- common/autotest_common.sh@868 -- # return 0 00:18:17.306 19:20:01 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:18:17.306 19:20:01 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:18:17.306 19:20:01 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:18:17.306 19:20:01 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:18:17.306 19:20:01 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:18:17.306 19:20:01 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:18:17.564 19:20:01 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:18:17.564 19:20:01 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:18:17.564 19:20:01 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:18:17.564 19:20:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:18:17.564 19:20:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:17.564 19:20:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:18:17.564 19:20:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:18:17.564 19:20:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:18:17.823 19:20:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:17.823 { 00:18:17.823 "name": "nvme0n1", 00:18:17.823 "aliases": [ 00:18:17.823 "b6c6255d-d89e-401d-b680-e919dcf682e0" 00:18:17.823 ], 00:18:17.823 "product_name": "NVMe disk", 00:18:17.823 "block_size": 4096, 00:18:17.823 "num_blocks": 1310720, 00:18:17.823 "uuid": "b6c6255d-d89e-401d-b680-e919dcf682e0", 00:18:17.823 "numa_id": -1, 00:18:17.823 "assigned_rate_limits": { 00:18:17.823 "rw_ios_per_sec": 0, 00:18:17.823 "rw_mbytes_per_sec": 0, 00:18:17.823 "r_mbytes_per_sec": 0, 00:18:17.823 "w_mbytes_per_sec": 0 00:18:17.823 }, 00:18:17.823 "claimed": false, 00:18:17.823 "zoned": false, 00:18:17.823 "supported_io_types": { 00:18:17.823 "read": true, 00:18:17.823 "write": true, 00:18:17.823 "unmap": true, 00:18:17.823 "flush": true, 00:18:17.823 "reset": true, 00:18:17.823 "nvme_admin": true, 00:18:17.823 "nvme_io": true, 00:18:17.823 "nvme_io_md": false, 00:18:17.823 "write_zeroes": true, 00:18:17.823 "zcopy": false, 00:18:17.823 "get_zone_info": false, 00:18:17.823 "zone_management": false, 00:18:17.823 "zone_append": false, 00:18:17.823 "compare": true, 00:18:17.823 "compare_and_write": false, 00:18:17.823 "abort": true, 00:18:17.823 
"seek_hole": false, 00:18:17.823 "seek_data": false, 00:18:17.823 "copy": true, 00:18:17.823 "nvme_iov_md": false 00:18:17.823 }, 00:18:17.823 "driver_specific": { 00:18:17.823 "nvme": [ 00:18:17.823 { 00:18:17.823 "pci_address": "0000:00:11.0", 00:18:17.823 "trid": { 00:18:17.823 "trtype": "PCIe", 00:18:17.823 "traddr": "0000:00:11.0" 00:18:17.823 }, 00:18:17.823 "ctrlr_data": { 00:18:17.823 "cntlid": 0, 00:18:17.823 "vendor_id": "0x1b36", 00:18:17.823 "model_number": "QEMU NVMe Ctrl", 00:18:17.823 "serial_number": "12341", 00:18:17.823 "firmware_revision": "8.0.0", 00:18:17.823 "subnqn": "nqn.2019-08.org.qemu:12341", 00:18:17.823 "oacs": { 00:18:17.823 "security": 0, 00:18:17.823 "format": 1, 00:18:17.823 "firmware": 0, 00:18:17.823 "ns_manage": 1 00:18:17.823 }, 00:18:17.823 "multi_ctrlr": false, 00:18:17.823 "ana_reporting": false 00:18:17.823 }, 00:18:17.823 "vs": { 00:18:17.823 "nvme_version": "1.4" 00:18:17.823 }, 00:18:17.823 "ns_data": { 00:18:17.823 "id": 1, 00:18:17.823 "can_share": false 00:18:17.823 } 00:18:17.823 } 00:18:17.823 ], 00:18:17.823 "mp_policy": "active_passive" 00:18:17.823 } 00:18:17.823 } 00:18:17.823 ]' 00:18:17.823 19:20:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:17.823 19:20:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:18:17.823 19:20:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:18.081 19:20:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=1310720 00:18:18.081 19:20:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:18:18.081 19:20:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 5120 00:18:18.081 19:20:02 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:18:18.081 19:20:02 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:18:18.081 19:20:02 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:18:18.081 19:20:02 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:18:18.081 19:20:02 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:18.081 19:20:02 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:18:18.081 19:20:02 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:18:18.339 19:20:02 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=083bbe22-069f-4514-8e36-2d8fac5fe4f0 00:18:18.339 19:20:02 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 083bbe22-069f-4514-8e36-2d8fac5fe4f0 00:18:18.598 19:20:02 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=de325992-cabd-427c-b448-69f536894131 00:18:18.598 19:20:02 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 de325992-cabd-427c-b448-69f536894131 00:18:18.598 19:20:02 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:18:18.598 19:20:02 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:18:18.598 19:20:02 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=de325992-cabd-427c-b448-69f536894131 00:18:18.598 19:20:02 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:18:18.598 19:20:02 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size de325992-cabd-427c-b448-69f536894131 00:18:18.598 19:20:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=de325992-cabd-427c-b448-69f536894131 
00:18:18.598 19:20:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:18.598 19:20:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:18:18.598 19:20:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:18:18.598 19:20:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b de325992-cabd-427c-b448-69f536894131 00:18:18.856 19:20:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:18.856 { 00:18:18.856 "name": "de325992-cabd-427c-b448-69f536894131", 00:18:18.856 "aliases": [ 00:18:18.856 "lvs/nvme0n1p0" 00:18:18.856 ], 00:18:18.856 "product_name": "Logical Volume", 00:18:18.856 "block_size": 4096, 00:18:18.856 "num_blocks": 26476544, 00:18:18.856 "uuid": "de325992-cabd-427c-b448-69f536894131", 00:18:18.856 "assigned_rate_limits": { 00:18:18.856 "rw_ios_per_sec": 0, 00:18:18.856 "rw_mbytes_per_sec": 0, 00:18:18.856 "r_mbytes_per_sec": 0, 00:18:18.856 "w_mbytes_per_sec": 0 00:18:18.856 }, 00:18:18.856 "claimed": false, 00:18:18.856 "zoned": false, 00:18:18.856 "supported_io_types": { 00:18:18.856 "read": true, 00:18:18.856 "write": true, 00:18:18.856 "unmap": true, 00:18:18.856 "flush": false, 00:18:18.856 "reset": true, 00:18:18.856 "nvme_admin": false, 00:18:18.856 "nvme_io": false, 00:18:18.856 "nvme_io_md": false, 00:18:18.856 "write_zeroes": true, 00:18:18.856 "zcopy": false, 00:18:18.856 "get_zone_info": false, 00:18:18.856 "zone_management": false, 00:18:18.856 "zone_append": false, 00:18:18.856 "compare": false, 00:18:18.856 "compare_and_write": false, 00:18:18.856 "abort": false, 00:18:18.856 "seek_hole": true, 00:18:18.856 "seek_data": true, 00:18:18.856 "copy": false, 00:18:18.856 "nvme_iov_md": false 00:18:18.856 }, 00:18:18.856 "driver_specific": { 00:18:18.856 "lvol": { 00:18:18.856 "lvol_store_uuid": "083bbe22-069f-4514-8e36-2d8fac5fe4f0", 00:18:18.856 "base_bdev": "nvme0n1", 00:18:18.856 "thin_provision": true, 00:18:18.856 "num_allocated_clusters": 0, 00:18:18.856 "snapshot": false, 00:18:18.856 "clone": false, 00:18:18.856 "esnap_clone": false 00:18:18.856 } 00:18:18.856 } 00:18:18.856 } 00:18:18.856 ]' 00:18:18.856 19:20:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:18.856 19:20:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:18:18.856 19:20:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:18.856 19:20:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:18:18.856 19:20:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:18:18.856 19:20:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:18:18.856 19:20:03 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:18:18.856 19:20:03 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:18:18.856 19:20:03 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:18:19.115 19:20:03 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:18:19.115 19:20:03 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:18:19.115 19:20:03 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size de325992-cabd-427c-b448-69f536894131 00:18:19.115 19:20:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=de325992-cabd-427c-b448-69f536894131 00:18:19.115 19:20:03 
ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:19.115 19:20:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:18:19.115 19:20:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:18:19.115 19:20:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b de325992-cabd-427c-b448-69f536894131 00:18:19.115 19:20:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:19.115 { 00:18:19.115 "name": "de325992-cabd-427c-b448-69f536894131", 00:18:19.115 "aliases": [ 00:18:19.115 "lvs/nvme0n1p0" 00:18:19.115 ], 00:18:19.115 "product_name": "Logical Volume", 00:18:19.115 "block_size": 4096, 00:18:19.115 "num_blocks": 26476544, 00:18:19.115 "uuid": "de325992-cabd-427c-b448-69f536894131", 00:18:19.115 "assigned_rate_limits": { 00:18:19.115 "rw_ios_per_sec": 0, 00:18:19.115 "rw_mbytes_per_sec": 0, 00:18:19.115 "r_mbytes_per_sec": 0, 00:18:19.115 "w_mbytes_per_sec": 0 00:18:19.115 }, 00:18:19.115 "claimed": false, 00:18:19.115 "zoned": false, 00:18:19.115 "supported_io_types": { 00:18:19.115 "read": true, 00:18:19.115 "write": true, 00:18:19.115 "unmap": true, 00:18:19.115 "flush": false, 00:18:19.115 "reset": true, 00:18:19.115 "nvme_admin": false, 00:18:19.115 "nvme_io": false, 00:18:19.115 "nvme_io_md": false, 00:18:19.115 "write_zeroes": true, 00:18:19.115 "zcopy": false, 00:18:19.115 "get_zone_info": false, 00:18:19.115 "zone_management": false, 00:18:19.115 "zone_append": false, 00:18:19.115 "compare": false, 00:18:19.115 "compare_and_write": false, 00:18:19.115 "abort": false, 00:18:19.115 "seek_hole": true, 00:18:19.115 "seek_data": true, 00:18:19.115 "copy": false, 00:18:19.115 "nvme_iov_md": false 00:18:19.115 }, 00:18:19.115 "driver_specific": { 00:18:19.115 "lvol": { 00:18:19.115 "lvol_store_uuid": "083bbe22-069f-4514-8e36-2d8fac5fe4f0", 00:18:19.115 "base_bdev": "nvme0n1", 00:18:19.115 "thin_provision": true, 00:18:19.115 "num_allocated_clusters": 0, 00:18:19.115 "snapshot": false, 00:18:19.115 "clone": false, 00:18:19.115 "esnap_clone": false 00:18:19.115 } 00:18:19.115 } 00:18:19.115 } 00:18:19.115 ]' 00:18:19.115 19:20:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:19.374 19:20:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:18:19.374 19:20:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:19.374 19:20:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:18:19.374 19:20:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:18:19.374 19:20:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:18:19.374 19:20:03 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:18:19.374 19:20:03 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:18:19.374 19:20:03 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:18:19.374 19:20:03 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:18:19.374 19:20:03 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:18:19.374 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:18:19.374 19:20:03 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size de325992-cabd-427c-b448-69f536894131 00:18:19.374 19:20:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local 
bdev_name=de325992-cabd-427c-b448-69f536894131 00:18:19.374 19:20:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:19.374 19:20:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:18:19.374 19:20:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:18:19.374 19:20:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b de325992-cabd-427c-b448-69f536894131 00:18:19.632 19:20:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:19.632 { 00:18:19.632 "name": "de325992-cabd-427c-b448-69f536894131", 00:18:19.632 "aliases": [ 00:18:19.632 "lvs/nvme0n1p0" 00:18:19.632 ], 00:18:19.632 "product_name": "Logical Volume", 00:18:19.632 "block_size": 4096, 00:18:19.632 "num_blocks": 26476544, 00:18:19.632 "uuid": "de325992-cabd-427c-b448-69f536894131", 00:18:19.632 "assigned_rate_limits": { 00:18:19.632 "rw_ios_per_sec": 0, 00:18:19.632 "rw_mbytes_per_sec": 0, 00:18:19.632 "r_mbytes_per_sec": 0, 00:18:19.632 "w_mbytes_per_sec": 0 00:18:19.632 }, 00:18:19.632 "claimed": false, 00:18:19.632 "zoned": false, 00:18:19.632 "supported_io_types": { 00:18:19.632 "read": true, 00:18:19.632 "write": true, 00:18:19.632 "unmap": true, 00:18:19.632 "flush": false, 00:18:19.632 "reset": true, 00:18:19.632 "nvme_admin": false, 00:18:19.632 "nvme_io": false, 00:18:19.632 "nvme_io_md": false, 00:18:19.632 "write_zeroes": true, 00:18:19.632 "zcopy": false, 00:18:19.632 "get_zone_info": false, 00:18:19.632 "zone_management": false, 00:18:19.632 "zone_append": false, 00:18:19.632 "compare": false, 00:18:19.632 "compare_and_write": false, 00:18:19.632 "abort": false, 00:18:19.632 "seek_hole": true, 00:18:19.632 "seek_data": true, 00:18:19.632 "copy": false, 00:18:19.632 "nvme_iov_md": false 00:18:19.632 }, 00:18:19.632 "driver_specific": { 00:18:19.632 "lvol": { 00:18:19.632 "lvol_store_uuid": "083bbe22-069f-4514-8e36-2d8fac5fe4f0", 00:18:19.632 "base_bdev": "nvme0n1", 00:18:19.632 "thin_provision": true, 00:18:19.632 "num_allocated_clusters": 0, 00:18:19.632 "snapshot": false, 00:18:19.632 "clone": false, 00:18:19.632 "esnap_clone": false 00:18:19.632 } 00:18:19.632 } 00:18:19.632 } 00:18:19.632 ]' 00:18:19.632 19:20:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:19.632 19:20:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:18:19.632 19:20:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:19.632 19:20:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:18:19.632 19:20:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:18:19.632 19:20:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:18:19.632 19:20:03 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:18:19.632 19:20:03 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:18:19.632 19:20:03 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d de325992-cabd-427c-b448-69f536894131 -c nvc0n1p0 --l2p_dram_limit 60 00:18:19.892 [2024-12-16 19:20:04.147475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.892 [2024-12-16 19:20:04.147513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:19.892 [2024-12-16 19:20:04.147525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:19.892 
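
The `[: -eq: unary operator expected` complaint from fio.sh line 52 above is an ordinary shell bug, not an FTL failure: the variable to the left of `-eq` expanded to nothing, leaving the malformed test `[ -eq 1 ]`. The condition simply evaluates false and the run continues, but the usual defensive spellings avoid the error entirely (`flag` below is a stand-in name, not the script's actual variable):

    # '[ -eq 1 ]' means the tested variable was empty or unset.
    [ "${flag:-0}" -eq 1 ] && echo enabled   # quote and default the operand
    (( ${flag:-0} == 1 )) && echo enabled    # or use an arithmetic test

The `bdev_ftl_create` call that follows is the step the whole setup has been building toward: the thin lvol becomes the data device, the 5171 MiB split of the second controller (nvc0n1p0) becomes the write-buffer cache, and the resident L2P is capped at 60 MiB. The `-t 240` client timeout matches the `timeout=240` set earlier in fio.sh, since first-time creation has to scrub the cache region (about 2.2 s in the trace below). Roughly, with `$base` standing in for the lvol name:

    "$rpc" -t 240 bdev_ftl_create -b ftl0 -d "$base" -c nvc0n1p0 --l2p_dram_limit 60
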
[2024-12-16 19:20:04.147532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.892 [2024-12-16 19:20:04.147579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.892 [2024-12-16 19:20:04.147588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:19.892 [2024-12-16 19:20:04.147595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:18:19.892 [2024-12-16 19:20:04.147601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.892 [2024-12-16 19:20:04.147635] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:19.892 [2024-12-16 19:20:04.148254] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:19.892 [2024-12-16 19:20:04.148269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.892 [2024-12-16 19:20:04.148275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:19.892 [2024-12-16 19:20:04.148284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.645 ms 00:18:19.892 [2024-12-16 19:20:04.148289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.892 [2024-12-16 19:20:04.148368] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID e5480353-163d-46c5-9adf-11d5221be82d 00:18:19.892 [2024-12-16 19:20:04.149385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.892 [2024-12-16 19:20:04.149491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:18:19.892 [2024-12-16 19:20:04.149503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:18:19.892 [2024-12-16 19:20:04.149511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.892 [2024-12-16 19:20:04.154696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.892 [2024-12-16 19:20:04.154724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:19.892 [2024-12-16 19:20:04.154732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.131 ms 00:18:19.892 [2024-12-16 19:20:04.154739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.892 [2024-12-16 19:20:04.154819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.892 [2024-12-16 19:20:04.154828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:19.892 [2024-12-16 19:20:04.154834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:18:19.892 [2024-12-16 19:20:04.154844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.892 [2024-12-16 19:20:04.154894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.892 [2024-12-16 19:20:04.154904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:19.892 [2024-12-16 19:20:04.154910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:19.892 [2024-12-16 19:20:04.154917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.892 [2024-12-16 19:20:04.154945] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:19.892 [2024-12-16 19:20:04.157825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.892 [2024-12-16 
19:20:04.157848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:19.892 [2024-12-16 19:20:04.157859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.883 ms 00:18:19.892 [2024-12-16 19:20:04.157866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.892 [2024-12-16 19:20:04.157909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.892 [2024-12-16 19:20:04.157915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:19.892 [2024-12-16 19:20:04.157923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:18:19.892 [2024-12-16 19:20:04.157930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.892 [2024-12-16 19:20:04.157951] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:18:19.892 [2024-12-16 19:20:04.158070] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:19.892 [2024-12-16 19:20:04.158082] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:19.892 [2024-12-16 19:20:04.158091] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:19.892 [2024-12-16 19:20:04.158100] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:19.892 [2024-12-16 19:20:04.158106] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:19.892 [2024-12-16 19:20:04.158114] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:19.892 [2024-12-16 19:20:04.158120] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:19.892 [2024-12-16 19:20:04.158127] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:19.892 [2024-12-16 19:20:04.158132] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:19.892 [2024-12-16 19:20:04.158139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.892 [2024-12-16 19:20:04.158145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:19.892 [2024-12-16 19:20:04.158153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.188 ms 00:18:19.892 [2024-12-16 19:20:04.158159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.892 [2024-12-16 19:20:04.158250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.892 [2024-12-16 19:20:04.158257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:19.892 [2024-12-16 19:20:04.158265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:18:19.892 [2024-12-16 19:20:04.158270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.892 [2024-12-16 19:20:04.158370] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:19.892 [2024-12-16 19:20:04.158378] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:19.892 [2024-12-16 19:20:04.158393] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:19.892 [2024-12-16 19:20:04.158399] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:19.892 [2024-12-16 19:20:04.158406] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region l2p 00:18:19.892 [2024-12-16 19:20:04.158411] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:19.892 [2024-12-16 19:20:04.158418] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:19.892 [2024-12-16 19:20:04.158423] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:19.892 [2024-12-16 19:20:04.158430] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:19.892 [2024-12-16 19:20:04.158435] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:19.892 [2024-12-16 19:20:04.158442] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:19.892 [2024-12-16 19:20:04.158447] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:19.892 [2024-12-16 19:20:04.158457] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:19.892 [2024-12-16 19:20:04.158461] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:19.892 [2024-12-16 19:20:04.158469] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:18:19.892 [2024-12-16 19:20:04.158474] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:19.892 [2024-12-16 19:20:04.158481] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:19.892 [2024-12-16 19:20:04.158486] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:18:19.892 [2024-12-16 19:20:04.158493] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:19.892 [2024-12-16 19:20:04.158498] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:19.892 [2024-12-16 19:20:04.158504] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:19.892 [2024-12-16 19:20:04.158509] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:19.892 [2024-12-16 19:20:04.158515] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:19.892 [2024-12-16 19:20:04.158520] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:19.892 [2024-12-16 19:20:04.158526] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:19.892 [2024-12-16 19:20:04.158532] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:19.892 [2024-12-16 19:20:04.158538] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:19.892 [2024-12-16 19:20:04.158543] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:19.892 [2024-12-16 19:20:04.158549] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:19.892 [2024-12-16 19:20:04.158554] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:18:19.892 [2024-12-16 19:20:04.158560] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:19.892 [2024-12-16 19:20:04.158565] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:19.892 [2024-12-16 19:20:04.158572] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:18:19.892 [2024-12-16 19:20:04.158588] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:19.892 [2024-12-16 19:20:04.158595] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:19.892 [2024-12-16 19:20:04.158600] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:18:19.892 [2024-12-16 19:20:04.158606] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:19.893 [2024-12-16 19:20:04.158612] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:19.893 [2024-12-16 19:20:04.158619] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:18:19.893 [2024-12-16 19:20:04.158623] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:19.893 [2024-12-16 19:20:04.158630] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:19.893 [2024-12-16 19:20:04.158635] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:18:19.893 [2024-12-16 19:20:04.158641] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:19.893 [2024-12-16 19:20:04.158645] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:19.893 [2024-12-16 19:20:04.158653] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:19.893 [2024-12-16 19:20:04.158659] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:19.893 [2024-12-16 19:20:04.158665] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:19.893 [2024-12-16 19:20:04.158671] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:19.893 [2024-12-16 19:20:04.158678] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:19.893 [2024-12-16 19:20:04.158684] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:19.893 [2024-12-16 19:20:04.158690] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:19.893 [2024-12-16 19:20:04.158697] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:19.893 [2024-12-16 19:20:04.158704] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:19.893 [2024-12-16 19:20:04.158710] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:19.893 [2024-12-16 19:20:04.158718] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:19.893 [2024-12-16 19:20:04.158724] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:19.893 [2024-12-16 19:20:04.158731] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:18:19.893 [2024-12-16 19:20:04.158736] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:18:19.893 [2024-12-16 19:20:04.158743] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:18:19.893 [2024-12-16 19:20:04.158748] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:18:19.893 [2024-12-16 19:20:04.158755] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:18:19.893 [2024-12-16 19:20:04.158761] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:18:19.893 [2024-12-16 19:20:04.158768] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 
blk_offs:0x7120 blk_sz:0x40 00:18:19.893 [2024-12-16 19:20:04.158773] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:18:19.893 [2024-12-16 19:20:04.158781] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:18:19.893 [2024-12-16 19:20:04.158786] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:18:19.893 [2024-12-16 19:20:04.158793] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:18:19.893 [2024-12-16 19:20:04.158799] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:18:19.893 [2024-12-16 19:20:04.158805] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:18:19.893 [2024-12-16 19:20:04.158811] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:19.893 [2024-12-16 19:20:04.158818] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:19.893 [2024-12-16 19:20:04.158825] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:19.893 [2024-12-16 19:20:04.158831] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:19.893 [2024-12-16 19:20:04.158837] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:19.893 [2024-12-16 19:20:04.158844] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:19.893 [2024-12-16 19:20:04.158850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.893 [2024-12-16 19:20:04.158867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:19.893 [2024-12-16 19:20:04.158872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.534 ms 00:18:19.893 [2024-12-16 19:20:04.158880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.893 [2024-12-16 19:20:04.158945] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
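
The layout is dumped twice above: per-region in MiB, and again (the `type:... blk_offs:... blk_sz:...` lines) in raw 4 KiB blocks, so the two views can be cross-checked; the `blk_sz:0x5000` entry, for instance, lines up with the 80.00 MiB l2p region. The same arithmetic explains the L2P figures: 20971520 entries at 4 bytes each is an 80 MiB mapping table covering 20971520 x 4 KiB = 80 GiB of logical space (the num_blocks ftl0 reports below), and with `--l2p_dram_limit 60` only about three quarters of it may stay resident, hence the later "l2p maximum resident size is: 59 (of 60) MiB":

    echo $(( 0x5000 * 4096 / 1024 / 1024 ))   # l2p region: 20480 blocks -> 80 MiB
    echo $(( 0x20 * 4096 ))                   # its offset: 32 blocks -> 131072 B ("0.12 MiB")
    echo $(( 20971520 * 4 / 1024 / 1024 ))    # L2P table: entries x 4 B -> 80 MiB
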
00:18:19.893 [2024-12-16 19:20:04.158957] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:18:22.423 [2024-12-16 19:20:06.355706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.423 [2024-12-16 19:20:06.355765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:18:22.423 [2024-12-16 19:20:06.355780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2196.751 ms 00:18:22.423 [2024-12-16 19:20:06.355790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.423 [2024-12-16 19:20:06.381759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.423 [2024-12-16 19:20:06.381802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:22.423 [2024-12-16 19:20:06.381814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.776 ms 00:18:22.423 [2024-12-16 19:20:06.381823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.423 [2024-12-16 19:20:06.381950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.423 [2024-12-16 19:20:06.381962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:22.423 [2024-12-16 19:20:06.381971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:18:22.423 [2024-12-16 19:20:06.381982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.423 [2024-12-16 19:20:06.423380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.423 [2024-12-16 19:20:06.423423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:22.423 [2024-12-16 19:20:06.423438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.353 ms 00:18:22.423 [2024-12-16 19:20:06.423449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.423 [2024-12-16 19:20:06.423494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.423 [2024-12-16 19:20:06.423505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:22.423 [2024-12-16 19:20:06.423514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:22.423 [2024-12-16 19:20:06.423523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.423 [2024-12-16 19:20:06.423894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.423 [2024-12-16 19:20:06.423927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:22.423 [2024-12-16 19:20:06.423936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.299 ms 00:18:22.423 [2024-12-16 19:20:06.423948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.423 [2024-12-16 19:20:06.424066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.423 [2024-12-16 19:20:06.424080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:22.423 [2024-12-16 19:20:06.424088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:18:22.423 [2024-12-16 19:20:06.424098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.423 [2024-12-16 19:20:06.438552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.423 [2024-12-16 19:20:06.438693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:22.423 [2024-12-16 
19:20:06.438709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.425 ms 00:18:22.423 [2024-12-16 19:20:06.438719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.423 [2024-12-16 19:20:06.450055] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:22.423 [2024-12-16 19:20:06.464704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.423 [2024-12-16 19:20:06.464736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:22.423 [2024-12-16 19:20:06.464748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.892 ms 00:18:22.423 [2024-12-16 19:20:06.464757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.423 [2024-12-16 19:20:06.513318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.423 [2024-12-16 19:20:06.513463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:18:22.423 [2024-12-16 19:20:06.513486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.524 ms 00:18:22.424 [2024-12-16 19:20:06.513494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.424 [2024-12-16 19:20:06.513669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.424 [2024-12-16 19:20:06.513679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:22.424 [2024-12-16 19:20:06.513691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.135 ms 00:18:22.424 [2024-12-16 19:20:06.513699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.424 [2024-12-16 19:20:06.536364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.424 [2024-12-16 19:20:06.536488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:18:22.424 [2024-12-16 19:20:06.536507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.612 ms 00:18:22.424 [2024-12-16 19:20:06.536515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.424 [2024-12-16 19:20:06.558705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.424 [2024-12-16 19:20:06.558735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:18:22.424 [2024-12-16 19:20:06.558747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.147 ms 00:18:22.424 [2024-12-16 19:20:06.558754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.424 [2024-12-16 19:20:06.559332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.424 [2024-12-16 19:20:06.559433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:22.424 [2024-12-16 19:20:06.559450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.534 ms 00:18:22.424 [2024-12-16 19:20:06.559458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.424 [2024-12-16 19:20:06.622141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.424 [2024-12-16 19:20:06.622188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:18:22.424 [2024-12-16 19:20:06.622204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 62.638 ms 00:18:22.424 [2024-12-16 19:20:06.622228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.424 [2024-12-16 
19:20:06.646246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.424 [2024-12-16 19:20:06.646277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:18:22.424 [2024-12-16 19:20:06.646290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.929 ms 00:18:22.424 [2024-12-16 19:20:06.646298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.424 [2024-12-16 19:20:06.669206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.424 [2024-12-16 19:20:06.669319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:18:22.424 [2024-12-16 19:20:06.669337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.860 ms 00:18:22.424 [2024-12-16 19:20:06.669345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.424 [2024-12-16 19:20:06.692471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.424 [2024-12-16 19:20:06.692582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:22.424 [2024-12-16 19:20:06.692600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.085 ms 00:18:22.424 [2024-12-16 19:20:06.692607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.424 [2024-12-16 19:20:06.692649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.424 [2024-12-16 19:20:06.692657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:22.424 [2024-12-16 19:20:06.692672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:22.424 [2024-12-16 19:20:06.692679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.424 [2024-12-16 19:20:06.692775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.424 [2024-12-16 19:20:06.692784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:22.424 [2024-12-16 19:20:06.692794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:18:22.424 [2024-12-16 19:20:06.692801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.424 [2024-12-16 19:20:06.693739] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2545.851 ms, result 0 00:18:22.424 { 00:18:22.424 "name": "ftl0", 00:18:22.424 "uuid": "e5480353-163d-46c5-9adf-11d5221be82d" 00:18:22.424 } 00:18:22.424 19:20:06 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:18:22.424 19:20:06 ftl.ftl_fio_basic -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:18:22.424 19:20:06 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:22.424 19:20:06 ftl.ftl_fio_basic -- common/autotest_common.sh@905 -- # local i 00:18:22.424 19:20:06 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:22.424 19:20:06 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:22.424 19:20:06 ftl.ftl_fio_basic -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:18:22.681 19:20:06 ftl.ftl_fio_basic -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:18:22.939 [ 00:18:22.939 { 00:18:22.939 "name": "ftl0", 00:18:22.939 "aliases": [ 00:18:22.939 "e5480353-163d-46c5-9adf-11d5221be82d" 00:18:22.939 ], 00:18:22.939 "product_name": "FTL 
disk", 00:18:22.939 "block_size": 4096, 00:18:22.939 "num_blocks": 20971520, 00:18:22.939 "uuid": "e5480353-163d-46c5-9adf-11d5221be82d", 00:18:22.939 "assigned_rate_limits": { 00:18:22.939 "rw_ios_per_sec": 0, 00:18:22.939 "rw_mbytes_per_sec": 0, 00:18:22.939 "r_mbytes_per_sec": 0, 00:18:22.939 "w_mbytes_per_sec": 0 00:18:22.939 }, 00:18:22.939 "claimed": false, 00:18:22.939 "zoned": false, 00:18:22.939 "supported_io_types": { 00:18:22.939 "read": true, 00:18:22.939 "write": true, 00:18:22.939 "unmap": true, 00:18:22.939 "flush": true, 00:18:22.939 "reset": false, 00:18:22.939 "nvme_admin": false, 00:18:22.939 "nvme_io": false, 00:18:22.939 "nvme_io_md": false, 00:18:22.939 "write_zeroes": true, 00:18:22.939 "zcopy": false, 00:18:22.939 "get_zone_info": false, 00:18:22.939 "zone_management": false, 00:18:22.939 "zone_append": false, 00:18:22.939 "compare": false, 00:18:22.939 "compare_and_write": false, 00:18:22.939 "abort": false, 00:18:22.939 "seek_hole": false, 00:18:22.939 "seek_data": false, 00:18:22.939 "copy": false, 00:18:22.939 "nvme_iov_md": false 00:18:22.939 }, 00:18:22.939 "driver_specific": { 00:18:22.939 "ftl": { 00:18:22.939 "base_bdev": "de325992-cabd-427c-b448-69f536894131", 00:18:22.939 "cache": "nvc0n1p0" 00:18:22.939 } 00:18:22.939 } 00:18:22.939 } 00:18:22.939 ] 00:18:22.939 19:20:07 ftl.ftl_fio_basic -- common/autotest_common.sh@911 -- # return 0 00:18:22.939 19:20:07 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:18:22.939 19:20:07 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:18:22.939 19:20:07 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:18:22.939 19:20:07 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:18:23.198 [2024-12-16 19:20:07.446492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.198 [2024-12-16 19:20:07.446529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:23.198 [2024-12-16 19:20:07.446540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:23.198 [2024-12-16 19:20:07.446548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.198 [2024-12-16 19:20:07.446578] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:23.198 [2024-12-16 19:20:07.448671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.198 [2024-12-16 19:20:07.448695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:23.198 [2024-12-16 19:20:07.448705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.079 ms 00:18:23.198 [2024-12-16 19:20:07.448712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.198 [2024-12-16 19:20:07.449111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.198 [2024-12-16 19:20:07.449127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:23.198 [2024-12-16 19:20:07.449136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.368 ms 00:18:23.198 [2024-12-16 19:20:07.449142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.198 [2024-12-16 19:20:07.451591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.198 [2024-12-16 19:20:07.451696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:23.198 
[2024-12-16 19:20:07.451710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.427 ms 00:18:23.198 [2024-12-16 19:20:07.451716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.198 [2024-12-16 19:20:07.456373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.198 [2024-12-16 19:20:07.456394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:23.198 [2024-12-16 19:20:07.456404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.630 ms 00:18:23.198 [2024-12-16 19:20:07.456411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.198 [2024-12-16 19:20:07.474568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.198 [2024-12-16 19:20:07.474673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:23.198 [2024-12-16 19:20:07.474699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.083 ms 00:18:23.198 [2024-12-16 19:20:07.474704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.198 [2024-12-16 19:20:07.486318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.198 [2024-12-16 19:20:07.486346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:23.198 [2024-12-16 19:20:07.486359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.574 ms 00:18:23.198 [2024-12-16 19:20:07.486366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.198 [2024-12-16 19:20:07.486527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.198 [2024-12-16 19:20:07.486536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:23.198 [2024-12-16 19:20:07.486545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.120 ms 00:18:23.198 [2024-12-16 19:20:07.486551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.198 [2024-12-16 19:20:07.504567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.198 [2024-12-16 19:20:07.504593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:23.198 [2024-12-16 19:20:07.504602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.990 ms 00:18:23.198 [2024-12-16 19:20:07.504608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.198 [2024-12-16 19:20:07.522418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.198 [2024-12-16 19:20:07.522443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:23.198 [2024-12-16 19:20:07.522453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.773 ms 00:18:23.198 [2024-12-16 19:20:07.522458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.198 [2024-12-16 19:20:07.539624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.198 [2024-12-16 19:20:07.539650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:23.198 [2024-12-16 19:20:07.539659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.125 ms 00:18:23.198 [2024-12-16 19:20:07.539664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.524 [2024-12-16 19:20:07.556879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.524 [2024-12-16 19:20:07.556905] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:23.524 [2024-12-16 19:20:07.556915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.136 ms 00:18:23.524 [2024-12-16 19:20:07.556920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.524 [2024-12-16 19:20:07.556958] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:23.524 [2024-12-16 19:20:07.556968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:23.524 [2024-12-16 19:20:07.556976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:23.524 [2024-12-16 19:20:07.556982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:23.524 [2024-12-16 19:20:07.556990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:23.524 [2024-12-16 19:20:07.556995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:23.524 [2024-12-16 19:20:07.557002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:23.524 [2024-12-16 19:20:07.557008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:23.524 [2024-12-16 19:20:07.557018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:23.524 [2024-12-16 19:20:07.557023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:23.524 [2024-12-16 19:20:07.557031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:23.524 [2024-12-16 19:20:07.557036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:23.524 [2024-12-16 19:20:07.557043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:23.524 [2024-12-16 19:20:07.557048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:23.524 [2024-12-16 19:20:07.557055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:23.524 [2024-12-16 19:20:07.557061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:23.524 [2024-12-16 19:20:07.557068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:23.524 [2024-12-16 19:20:07.557073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:23.524 [2024-12-16 19:20:07.557080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:23.524 [2024-12-16 19:20:07.557086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:23.524 [2024-12-16 19:20:07.557092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:23.524 [2024-12-16 19:20:07.557098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:23.524 [2024-12-16 19:20:07.557106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:23.524 
[2024-12-16 19:20:07.557112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:23.524 [2024-12-16 19:20:07.557120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:23.524 [2024-12-16 19:20:07.557126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:23.524 [2024-12-16 19:20:07.557132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:23.524 [2024-12-16 19:20:07.557139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:23.524 [2024-12-16 19:20:07.557146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:23.524 [2024-12-16 19:20:07.557152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:23.524 [2024-12-16 19:20:07.557158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:23.524 [2024-12-16 19:20:07.557164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:23.524 [2024-12-16 19:20:07.557185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:23.524 [2024-12-16 19:20:07.557193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:23.524 [2024-12-16 19:20:07.557201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:23.524 [2024-12-16 19:20:07.557207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:23.524 [2024-12-16 19:20:07.557228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:23.524 [2024-12-16 19:20:07.557235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:23.524 [2024-12-16 19:20:07.557242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:23.524 [2024-12-16 19:20:07.557248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:23.524 [2024-12-16 19:20:07.557256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:23.524 [2024-12-16 19:20:07.557262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:23.525 [2024-12-16 19:20:07.557269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:23.525 [2024-12-16 19:20:07.557275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:23.525 [2024-12-16 19:20:07.557282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:23.525 [2024-12-16 19:20:07.557287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:23.525 [2024-12-16 19:20:07.557294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:23.525 [2024-12-16 19:20:07.557300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 
state: free 00:18:23.525 [2024-12-16 19:20:07.557308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:23.525 [2024-12-16 19:20:07.557314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:23.525 [2024-12-16 19:20:07.557321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:23.525 [2024-12-16 19:20:07.557326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:23.525 [2024-12-16 19:20:07.557333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:23.525 [2024-12-16 19:20:07.557339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:23.525 [2024-12-16 19:20:07.557346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:23.525 [2024-12-16 19:20:07.557352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:23.525 [2024-12-16 19:20:07.557361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:23.525 [2024-12-16 19:20:07.557366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:23.525 [2024-12-16 19:20:07.557373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:23.525 [2024-12-16 19:20:07.557379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:23.525 [2024-12-16 19:20:07.557386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:23.525 [2024-12-16 19:20:07.557392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:23.525 [2024-12-16 19:20:07.557399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:23.525 [2024-12-16 19:20:07.557404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:23.525 [2024-12-16 19:20:07.557419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:23.525 [2024-12-16 19:20:07.557426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:23.525 [2024-12-16 19:20:07.557433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:23.525 [2024-12-16 19:20:07.557439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:23.525 [2024-12-16 19:20:07.557446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:23.525 [2024-12-16 19:20:07.557451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:23.525 [2024-12-16 19:20:07.557458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:23.525 [2024-12-16 19:20:07.557464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:23.525 [2024-12-16 19:20:07.557472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 
0 / 261120 wr_cnt: 0 state: free 00:18:23.525 [2024-12-16 19:20:07.557478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:23.525 [2024-12-16 19:20:07.557486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:23.525 [2024-12-16 19:20:07.557491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:23.525 [2024-12-16 19:20:07.557497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:23.525 [2024-12-16 19:20:07.557503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:23.525 [2024-12-16 19:20:07.557510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:23.525 [2024-12-16 19:20:07.557516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:23.525 [2024-12-16 19:20:07.557522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:23.525 [2024-12-16 19:20:07.557529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:23.525 [2024-12-16 19:20:07.557546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:23.525 [2024-12-16 19:20:07.557551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:23.525 [2024-12-16 19:20:07.557559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:23.525 [2024-12-16 19:20:07.557564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:23.525 [2024-12-16 19:20:07.557571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:23.525 [2024-12-16 19:20:07.557576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:23.525 [2024-12-16 19:20:07.557585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:23.525 [2024-12-16 19:20:07.557590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:23.525 [2024-12-16 19:20:07.557597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:23.525 [2024-12-16 19:20:07.557602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:23.525 [2024-12-16 19:20:07.557609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:23.525 [2024-12-16 19:20:07.557615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:23.525 [2024-12-16 19:20:07.557621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:23.525 [2024-12-16 19:20:07.557627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:23.525 [2024-12-16 19:20:07.557634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:23.525 [2024-12-16 19:20:07.557640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:23.525 [2024-12-16 19:20:07.557647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:23.525 [2024-12-16 19:20:07.557652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:23.525 [2024-12-16 19:20:07.557661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:23.525 [2024-12-16 19:20:07.557672] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:23.525 [2024-12-16 19:20:07.557680] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: e5480353-163d-46c5-9adf-11d5221be82d 00:18:23.525 [2024-12-16 19:20:07.557686] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:23.525 [2024-12-16 19:20:07.557694] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:23.525 [2024-12-16 19:20:07.557699] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:23.525 [2024-12-16 19:20:07.557708] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:23.525 [2024-12-16 19:20:07.557713] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:23.525 [2024-12-16 19:20:07.557720] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:23.525 [2024-12-16 19:20:07.557725] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:23.525 [2024-12-16 19:20:07.557731] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:23.525 [2024-12-16 19:20:07.557736] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:23.525 [2024-12-16 19:20:07.557743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.525 [2024-12-16 19:20:07.557749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:23.525 [2024-12-16 19:20:07.557757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.786 ms 00:18:23.525 [2024-12-16 19:20:07.557763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.525 [2024-12-16 19:20:07.567438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.526 [2024-12-16 19:20:07.567464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:23.526 [2024-12-16 19:20:07.567473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.644 ms 00:18:23.526 [2024-12-16 19:20:07.567479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.526 [2024-12-16 19:20:07.567762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.526 [2024-12-16 19:20:07.567773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:23.526 [2024-12-16 19:20:07.567780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.255 ms 00:18:23.526 [2024-12-16 19:20:07.567786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.526 [2024-12-16 19:20:07.602585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:23.526 [2024-12-16 19:20:07.602612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:23.526 [2024-12-16 19:20:07.602621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:23.526 [2024-12-16 19:20:07.602627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:18:23.526 [2024-12-16 19:20:07.602680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:23.526 [2024-12-16 19:20:07.602687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:23.526 [2024-12-16 19:20:07.602694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:23.526 [2024-12-16 19:20:07.602700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.526 [2024-12-16 19:20:07.602768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:23.526 [2024-12-16 19:20:07.602777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:23.526 [2024-12-16 19:20:07.602785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:23.526 [2024-12-16 19:20:07.602791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.526 [2024-12-16 19:20:07.602821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:23.526 [2024-12-16 19:20:07.602827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:23.526 [2024-12-16 19:20:07.602834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:23.526 [2024-12-16 19:20:07.602840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.526 [2024-12-16 19:20:07.665326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:23.526 [2024-12-16 19:20:07.665368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:23.526 [2024-12-16 19:20:07.665379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:23.526 [2024-12-16 19:20:07.665386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.526 [2024-12-16 19:20:07.713679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:23.526 [2024-12-16 19:20:07.713716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:23.526 [2024-12-16 19:20:07.713726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:23.526 [2024-12-16 19:20:07.713732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.526 [2024-12-16 19:20:07.713812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:23.526 [2024-12-16 19:20:07.713819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:23.526 [2024-12-16 19:20:07.713828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:23.526 [2024-12-16 19:20:07.713834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.526 [2024-12-16 19:20:07.713899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:23.526 [2024-12-16 19:20:07.713907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:23.526 [2024-12-16 19:20:07.713914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:23.526 [2024-12-16 19:20:07.713919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.526 [2024-12-16 19:20:07.714006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:23.526 [2024-12-16 19:20:07.714013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:23.526 [2024-12-16 19:20:07.714021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:23.526 [2024-12-16 
19:20:07.714028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.526 [2024-12-16 19:20:07.714075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:23.526 [2024-12-16 19:20:07.714082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:23.526 [2024-12-16 19:20:07.714089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:23.526 [2024-12-16 19:20:07.714095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.526 [2024-12-16 19:20:07.714132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:23.526 [2024-12-16 19:20:07.714138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:23.526 [2024-12-16 19:20:07.714146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:23.526 [2024-12-16 19:20:07.714151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.526 [2024-12-16 19:20:07.714210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:23.526 [2024-12-16 19:20:07.714218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:23.526 [2024-12-16 19:20:07.714226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:23.526 [2024-12-16 19:20:07.714231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.526 [2024-12-16 19:20:07.714376] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 267.855 ms, result 0 00:18:23.526 true 00:18:23.526 19:20:07 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 76954 00:18:23.526 19:20:07 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # '[' -z 76954 ']' 00:18:23.526 19:20:07 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # kill -0 76954 00:18:23.526 19:20:07 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # uname 00:18:23.526 19:20:07 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:23.526 19:20:07 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76954 00:18:23.526 killing process with pid 76954 00:18:23.526 19:20:07 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:23.526 19:20:07 ftl.ftl_fio_basic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:23.526 19:20:07 ftl.ftl_fio_basic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76954' 00:18:23.526 19:20:07 ftl.ftl_fio_basic -- common/autotest_common.sh@973 -- # kill 76954 00:18:23.526 19:20:07 ftl.ftl_fio_basic -- common/autotest_common.sh@978 -- # wait 76954 00:18:28.794 19:20:12 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:18:28.794 19:20:12 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:18:28.794 19:20:12 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:18:28.794 19:20:12 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:28.794 19:20:12 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:28.794 19:20:12 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:18:28.794 19:20:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:18:28.794 19:20:12 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:28.794 19:20:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:28.794 19:20:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:28.794 19:20:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:28.794 19:20:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:18:28.794 19:20:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:28.794 19:20:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:28.794 19:20:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:28.794 19:20:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:18:28.794 19:20:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:28.794 19:20:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:28.794 19:20:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:28.794 19:20:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:18:28.794 19:20:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:28.794 19:20:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:18:28.794 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:18:28.794 fio-3.35 00:18:28.794 Starting 1 thread 00:18:34.084 00:18:34.085 test: (groupid=0, jobs=1): err= 0: pid=77123: Mon Dec 16 19:20:18 2024 00:18:34.085 read: IOPS=978, BW=65.0MiB/s (68.2MB/s)(255MiB/3916msec) 00:18:34.085 slat (nsec): min=2949, max=44989, avg=4901.52, stdev=2921.99 00:18:34.085 clat (usec): min=249, max=1256, avg=458.31, stdev=166.03 00:18:34.085 lat (usec): min=252, max=1275, avg=463.21, stdev=167.09 00:18:34.085 clat percentiles (usec): 00:18:34.085 | 1.00th=[ 310], 5.00th=[ 314], 10.00th=[ 314], 20.00th=[ 318], 00:18:34.085 | 30.00th=[ 322], 40.00th=[ 334], 50.00th=[ 433], 60.00th=[ 478], 00:18:34.085 | 70.00th=[ 523], 80.00th=[ 553], 90.00th=[ 627], 95.00th=[ 873], 00:18:34.085 | 99.00th=[ 988], 99.50th=[ 1074], 99.90th=[ 1188], 99.95th=[ 1237], 00:18:34.085 | 99.99th=[ 1254] 00:18:34.085 write: IOPS=985, BW=65.5MiB/s (68.6MB/s)(256MiB/3912msec); 0 zone resets 00:18:34.085 slat (nsec): min=13475, max=95169, avg=19593.02, stdev=5766.02 00:18:34.085 clat (usec): min=290, max=1899, avg=521.93, stdev=204.76 00:18:34.085 lat (usec): min=313, max=1919, avg=541.52, stdev=207.02 00:18:34.085 clat percentiles (usec): 00:18:34.085 | 1.00th=[ 334], 5.00th=[ 338], 10.00th=[ 343], 20.00th=[ 347], 00:18:34.085 | 30.00th=[ 351], 40.00th=[ 371], 50.00th=[ 494], 60.00th=[ 562], 00:18:34.085 | 70.00th=[ 611], 80.00th=[ 635], 90.00th=[ 725], 95.00th=[ 963], 00:18:34.085 | 99.00th=[ 1123], 99.50th=[ 1532], 99.90th=[ 1745], 99.95th=[ 1844], 00:18:34.085 | 99.99th=[ 1893] 00:18:34.085 bw ( KiB/s): min=53040, max=95608, per=100.00%, avg=69262.86, stdev=15996.77, samples=7 00:18:34.085 iops : min= 780, max= 1406, avg=1018.57, stdev=235.25, samples=7 00:18:34.085 lat (usec) : 250=0.01%, 500=57.55%, 750=34.18%, 
1000=6.32% 00:18:34.085 lat (msec) : 2=1.94% 00:18:34.085 cpu : usr=99.13%, sys=0.08%, ctx=7, majf=0, minf=1167 00:18:34.085 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:34.085 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:34.085 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:34.085 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:34.085 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:34.085 00:18:34.085 Run status group 0 (all jobs): 00:18:34.085 READ: bw=65.0MiB/s (68.2MB/s), 65.0MiB/s-65.0MiB/s (68.2MB/s-68.2MB/s), io=255MiB (267MB), run=3916-3916msec 00:18:34.085 WRITE: bw=65.5MiB/s (68.6MB/s), 65.5MiB/s-65.5MiB/s (68.6MB/s-68.6MB/s), io=256MiB (269MB), run=3912-3912msec 00:18:35.471 ----------------------------------------------------- 00:18:35.471 Suppressions used: 00:18:35.471 count bytes template 00:18:35.471 1 5 /usr/src/fio/parse.c 00:18:35.471 1 8 libtcmalloc_minimal.so 00:18:35.471 1 904 libcrypto.so 00:18:35.471 ----------------------------------------------------- 00:18:35.471 00:18:35.471 19:20:19 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:18:35.471 19:20:19 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:35.471 19:20:19 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:35.471 19:20:19 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:18:35.471 19:20:19 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:18:35.471 19:20:19 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:35.471 19:20:19 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:35.471 19:20:19 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:18:35.471 19:20:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:18:35.471 19:20:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:35.471 19:20:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:35.471 19:20:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:35.471 19:20:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:35.471 19:20:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:18:35.471 19:20:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:35.471 19:20:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:35.471 19:20:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:35.471 19:20:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:18:35.471 19:20:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:35.471 19:20:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:35.472 19:20:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:35.472 19:20:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:18:35.472 19:20:19 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:35.472 19:20:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:18:35.472 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:18:35.472 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:18:35.472 fio-3.35 00:18:35.472 Starting 2 threads 00:19:02.083 00:19:02.083 first_half: (groupid=0, jobs=1): err= 0: pid=77220: Mon Dec 16 19:20:42 2024 00:19:02.083 read: IOPS=3043, BW=11.9MiB/s (12.5MB/s)(256MiB/21516msec) 00:19:02.083 slat (usec): min=2, max=582, avg= 5.42, stdev= 3.63 00:19:02.083 clat (usec): min=1683, max=279994, avg=35941.67, stdev=22367.36 00:19:02.083 lat (usec): min=1686, max=279999, avg=35947.09, stdev=22367.46 00:19:02.083 clat percentiles (msec): 00:19:02.083 | 1.00th=[ 14], 5.00th=[ 27], 10.00th=[ 28], 20.00th=[ 30], 00:19:02.083 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 31], 00:19:02.083 | 70.00th=[ 33], 80.00th=[ 36], 90.00th=[ 40], 95.00th=[ 70], 00:19:02.083 | 99.00th=[ 150], 99.50th=[ 165], 99.90th=[ 211], 99.95th=[ 249], 00:19:02.083 | 99.99th=[ 275] 00:19:02.083 write: IOPS=3053, BW=11.9MiB/s (12.5MB/s)(256MiB/21465msec); 0 zone resets 00:19:02.083 slat (usec): min=3, max=358, avg= 6.62, stdev= 3.15 00:19:02.083 clat (usec): min=391, max=43901, avg=6085.76, stdev=5781.68 00:19:02.083 lat (usec): min=398, max=43907, avg=6092.38, stdev=5781.81 00:19:02.083 clat percentiles (usec): 00:19:02.083 | 1.00th=[ 685], 5.00th=[ 889], 10.00th=[ 1221], 20.00th=[ 2769], 00:19:02.083 | 30.00th=[ 3523], 40.00th=[ 4228], 50.00th=[ 4817], 60.00th=[ 5276], 00:19:02.083 | 70.00th=[ 5932], 80.00th=[ 7177], 90.00th=[11076], 95.00th=[18220], 00:19:02.083 | 99.00th=[30278], 99.50th=[32375], 99.90th=[41681], 99.95th=[42206], 00:19:02.083 | 99.99th=[43254] 00:19:02.084 bw ( KiB/s): min= 992, max=53728, per=96.93%, avg=23675.77, stdev=15656.48, samples=22 00:19:02.084 iops : min= 248, max=13432, avg=5918.91, stdev=3914.08, samples=22 00:19:02.084 lat (usec) : 500=0.01%, 750=0.93%, 1000=2.71% 00:19:02.084 lat (msec) : 2=3.24%, 4=11.64%, 10=24.86%, 20=5.68%, 50=47.69% 00:19:02.084 lat (msec) : 100=1.47%, 250=1.75%, 500=0.02% 00:19:02.084 cpu : usr=98.77%, sys=0.28%, ctx=128, majf=0, minf=5548 00:19:02.084 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:19:02.084 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:02.084 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:02.084 issued rwts: total=65475,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:02.084 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:02.084 second_half: (groupid=0, jobs=1): err= 0: pid=77221: Mon Dec 16 19:20:42 2024 00:19:02.084 read: IOPS=3084, BW=12.0MiB/s (12.6MB/s)(256MiB/21230msec) 00:19:02.084 slat (nsec): min=3133, max=20851, avg=4147.25, stdev=929.81 00:19:02.084 clat (msec): min=8, max=217, avg=36.01, stdev=20.17 00:19:02.084 lat (msec): min=8, max=217, avg=36.01, stdev=20.17 00:19:02.084 clat percentiles (msec): 00:19:02.084 | 1.00th=[ 26], 5.00th=[ 28], 10.00th=[ 28], 20.00th=[ 30], 00:19:02.084 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 31], 00:19:02.084 | 70.00th=[ 33], 80.00th=[ 36], 90.00th=[ 40], 95.00th=[ 66], 00:19:02.084 
| 99.00th=[ 142], 99.50th=[ 155], 99.90th=[ 176], 99.95th=[ 209], 00:19:02.084 | 99.99th=[ 218] 00:19:02.084 write: IOPS=3319, BW=13.0MiB/s (13.6MB/s)(256MiB/19740msec); 0 zone resets 00:19:02.084 slat (usec): min=3, max=597, avg= 5.49, stdev= 4.35 00:19:02.084 clat (usec): min=374, max=39941, avg=5464.19, stdev=3428.81 00:19:02.084 lat (usec): min=382, max=39955, avg=5469.68, stdev=3429.14 00:19:02.084 clat percentiles (usec): 00:19:02.084 | 1.00th=[ 914], 5.00th=[ 1582], 10.00th=[ 2409], 20.00th=[ 3064], 00:19:02.084 | 30.00th=[ 3589], 40.00th=[ 4228], 50.00th=[ 4817], 60.00th=[ 5342], 00:19:02.084 | 70.00th=[ 5735], 80.00th=[ 6915], 90.00th=[10421], 95.00th=[11338], 00:19:02.084 | 99.00th=[19006], 99.50th=[24511], 99.90th=[28181], 99.95th=[31065], 00:19:02.084 | 99.99th=[38536] 00:19:02.084 bw ( KiB/s): min= 2736, max=43520, per=100.00%, avg=27594.11, stdev=13658.81, samples=19 00:19:02.084 iops : min= 684, max=10880, avg=6898.42, stdev=3414.74, samples=19 00:19:02.084 lat (usec) : 500=0.02%, 750=0.20%, 1000=0.50% 00:19:02.084 lat (msec) : 2=2.58%, 4=14.94%, 10=26.01%, 20=5.39%, 50=47.12% 00:19:02.084 lat (msec) : 100=1.57%, 250=1.67% 00:19:02.084 cpu : usr=99.32%, sys=0.10%, ctx=32, majf=0, minf=5563 00:19:02.084 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:19:02.084 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:02.084 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:02.084 issued rwts: total=65488,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:02.084 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:02.084 00:19:02.084 Run status group 0 (all jobs): 00:19:02.084 READ: bw=23.8MiB/s (24.9MB/s), 11.9MiB/s-12.0MiB/s (12.5MB/s-12.6MB/s), io=512MiB (536MB), run=21230-21516msec 00:19:02.084 WRITE: bw=23.9MiB/s (25.0MB/s), 11.9MiB/s-13.0MiB/s (12.5MB/s-13.6MB/s), io=512MiB (537MB), run=19740-21465msec 00:19:02.084 ----------------------------------------------------- 00:19:02.084 Suppressions used: 00:19:02.084 count bytes template 00:19:02.084 2 10 /usr/src/fio/parse.c 00:19:02.084 3 288 /usr/src/fio/iolog.c 00:19:02.084 1 8 libtcmalloc_minimal.so 00:19:02.084 1 904 libcrypto.so 00:19:02.084 ----------------------------------------------------- 00:19:02.084 00:19:02.084 19:20:44 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:19:02.084 19:20:44 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:02.084 19:20:44 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:02.084 19:20:44 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:19:02.084 19:20:44 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:19:02.084 19:20:44 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:02.084 19:20:44 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:02.084 19:20:44 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:19:02.084 19:20:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:19:02.084 19:20:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:19:02.084 19:20:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:02.084 19:20:44 ftl.ftl_fio_basic 
-- common/autotest_common.sh@1343 -- # local sanitizers 00:19:02.084 19:20:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:02.084 19:20:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:19:02.084 19:20:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:02.084 19:20:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:02.084 19:20:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:19:02.084 19:20:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:02.084 19:20:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:02.084 19:20:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:19:02.084 19:20:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:19:02.084 19:20:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:19:02.084 19:20:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:02.084 19:20:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:19:02.084 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:19:02.084 fio-3.35 00:19:02.084 Starting 1 thread 00:19:16.989 00:19:16.989 test: (groupid=0, jobs=1): err= 0: pid=77512: Mon Dec 16 19:21:00 2024 00:19:16.989 read: IOPS=7908, BW=30.9MiB/s (32.4MB/s)(255MiB/8245msec) 00:19:16.989 slat (nsec): min=3140, max=26798, avg=4846.02, stdev=1141.87 00:19:16.989 clat (usec): min=541, max=31804, avg=16177.96, stdev=1672.94 00:19:16.989 lat (usec): min=545, max=31808, avg=16182.80, stdev=1672.95 00:19:16.989 clat percentiles (usec): 00:19:16.989 | 1.00th=[14484], 5.00th=[14877], 10.00th=[15139], 20.00th=[15270], 00:19:16.989 | 30.00th=[15401], 40.00th=[15664], 50.00th=[15795], 60.00th=[16057], 00:19:16.989 | 70.00th=[16319], 80.00th=[16450], 90.00th=[17171], 95.00th=[18744], 00:19:16.989 | 99.00th=[24511], 99.50th=[25560], 99.90th=[26084], 99.95th=[27919], 00:19:16.989 | 99.99th=[31065] 00:19:16.989 write: IOPS=10.5k, BW=41.2MiB/s (43.1MB/s)(256MiB/6221msec); 0 zone resets 00:19:16.989 slat (usec): min=4, max=163, avg= 7.36, stdev= 3.38 00:19:16.989 clat (usec): min=527, max=57109, avg=12098.98, stdev=12612.58 00:19:16.989 lat (usec): min=534, max=57116, avg=12106.34, stdev=12612.60 00:19:16.989 clat percentiles (usec): 00:19:16.989 | 1.00th=[ 750], 5.00th=[ 996], 10.00th=[ 1123], 20.00th=[ 1287], 00:19:16.989 | 30.00th=[ 1467], 40.00th=[ 1811], 50.00th=[10028], 60.00th=[12125], 00:19:16.989 | 70.00th=[14746], 80.00th=[17957], 90.00th=[36963], 95.00th=[39060], 00:19:16.989 | 99.00th=[41681], 99.50th=[42730], 99.90th=[49546], 99.95th=[51119], 00:19:16.989 | 99.99th=[54264] 00:19:16.989 bw ( KiB/s): min=18656, max=46936, per=95.71%, avg=40329.85, stdev=8091.19, samples=13 00:19:16.989 iops : min= 4664, max=11734, avg=10082.46, stdev=2022.80, samples=13 00:19:16.989 lat (usec) : 750=0.51%, 1000=2.08% 00:19:16.989 lat (msec) : 2=17.90%, 4=0.54%, 10=4.08%, 20=64.43%, 50=10.42% 00:19:16.989 lat (msec) : 100=0.04% 00:19:16.989 cpu : usr=98.96%, sys=0.25%, ctx=21, majf=0, minf=5563 00:19:16.989 IO 
depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:19:16.989 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:16.989 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:16.989 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:16.989 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:16.989 00:19:16.989 Run status group 0 (all jobs): 00:19:16.989 READ: bw=30.9MiB/s (32.4MB/s), 30.9MiB/s-30.9MiB/s (32.4MB/s-32.4MB/s), io=255MiB (267MB), run=8245-8245msec 00:19:16.989 WRITE: bw=41.2MiB/s (43.1MB/s), 41.2MiB/s-41.2MiB/s (43.1MB/s-43.1MB/s), io=256MiB (268MB), run=6221-6221msec 00:19:17.933 ----------------------------------------------------- 00:19:17.933 Suppressions used: 00:19:17.933 count bytes template 00:19:17.933 1 5 /usr/src/fio/parse.c 00:19:17.933 2 192 /usr/src/fio/iolog.c 00:19:17.933 1 8 libtcmalloc_minimal.so 00:19:17.933 1 904 libcrypto.so 00:19:17.933 ----------------------------------------------------- 00:19:17.933 00:19:17.933 19:21:02 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:19:17.933 19:21:02 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:17.933 19:21:02 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:17.933 19:21:02 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:18.195 Remove shared memory files 00:19:18.195 19:21:02 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:19:18.195 19:21:02 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:19:18.195 19:21:02 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:19:18.195 19:21:02 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:19:18.195 19:21:02 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid58980 /dev/shm/spdk_tgt_trace.pid75871 00:19:18.195 19:21:02 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:19:18.195 19:21:02 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:19:18.195 ************************************ 00:19:18.195 END TEST ftl_fio_basic 00:19:18.195 ************************************ 00:19:18.195 00:19:18.195 real 1m1.722s 00:19:18.195 user 2m3.702s 00:19:18.195 sys 0m11.231s 00:19:18.195 19:21:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:18.195 19:21:02 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:18.195 19:21:02 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:19:18.195 19:21:02 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:19:18.195 19:21:02 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:18.195 19:21:02 ftl -- common/autotest_common.sh@10 -- # set +x 00:19:18.195 ************************************ 00:19:18.195 START TEST ftl_bdevperf 00:19:18.195 ************************************ 00:19:18.195 19:21:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:19:18.195 * Looking for test storage... 
00:19:18.195 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:19:18.195 19:21:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:18.195 19:21:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1711 -- # lcov --version 00:19:18.195 19:21:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:18.195 19:21:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:18.195 19:21:02 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:18.195 19:21:02 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:18.195 19:21:02 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:18.195 19:21:02 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:19:18.195 19:21:02 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:19:18.195 19:21:02 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:19:18.195 19:21:02 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:19:18.195 19:21:02 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:19:18.195 19:21:02 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:19:18.195 19:21:02 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:19:18.195 19:21:02 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:18.196 19:21:02 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:19:18.196 19:21:02 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:19:18.196 19:21:02 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:18.196 19:21:02 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:18.196 19:21:02 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:19:18.196 19:21:02 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:19:18.196 19:21:02 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:18.196 19:21:02 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:19:18.196 19:21:02 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:19:18.196 19:21:02 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:19:18.196 19:21:02 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:19:18.196 19:21:02 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:18.196 19:21:02 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:19:18.196 19:21:02 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:19:18.196 19:21:02 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:18.196 19:21:02 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:18.196 19:21:02 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:19:18.196 19:21:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:18.196 19:21:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:18.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:18.196 --rc genhtml_branch_coverage=1 00:19:18.196 --rc genhtml_function_coverage=1 00:19:18.196 --rc genhtml_legend=1 00:19:18.196 --rc geninfo_all_blocks=1 00:19:18.196 --rc geninfo_unexecuted_blocks=1 00:19:18.196 00:19:18.196 ' 00:19:18.196 19:21:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:18.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:18.196 --rc genhtml_branch_coverage=1 00:19:18.196 
--rc genhtml_function_coverage=1 00:19:18.196 --rc genhtml_legend=1 00:19:18.196 --rc geninfo_all_blocks=1 00:19:18.196 --rc geninfo_unexecuted_blocks=1 00:19:18.196 00:19:18.196 ' 00:19:18.196 19:21:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:18.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:18.196 --rc genhtml_branch_coverage=1 00:19:18.196 --rc genhtml_function_coverage=1 00:19:18.196 --rc genhtml_legend=1 00:19:18.196 --rc geninfo_all_blocks=1 00:19:18.196 --rc geninfo_unexecuted_blocks=1 00:19:18.196 00:19:18.196 ' 00:19:18.196 19:21:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:18.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:18.196 --rc genhtml_branch_coverage=1 00:19:18.196 --rc genhtml_function_coverage=1 00:19:18.196 --rc genhtml_legend=1 00:19:18.196 --rc geninfo_all_blocks=1 00:19:18.196 --rc geninfo_unexecuted_blocks=1 00:19:18.196 00:19:18.196 ' 00:19:18.196 19:21:02 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:19:18.196 19:21:02 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:19:18.196 19:21:02 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:19:18.196 19:21:02 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:19:18.196 19:21:02 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:19:18.196 19:21:02 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:19:18.196 19:21:02 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:18.196 19:21:02 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:19:18.196 19:21:02 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:19:18.196 19:21:02 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:18.196 19:21:02 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:18.196 19:21:02 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:19:18.196 19:21:02 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:19:18.196 19:21:02 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:18.196 19:21:02 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:18.196 19:21:02 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:19:18.196 19:21:02 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:19:18.196 19:21:02 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:18.196 19:21:02 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:18.196 19:21:02 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:19:18.196 19:21:02 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:19:18.196 19:21:02 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:18.196 19:21:02 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:18.196 19:21:02 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:18.196 19:21:02 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:18.196 19:21:02 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:19:18.196 19:21:02 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:19:18.196 19:21:02 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:18.196 19:21:02 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:18.196 19:21:02 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:19:18.196 19:21:02 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:19:18.196 19:21:02 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:19:18.196 19:21:02 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:18.196 19:21:02 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:19:18.196 19:21:02 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=77761 00:19:18.196 19:21:02 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:19:18.196 19:21:02 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 77761 00:19:18.196 19:21:02 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 77761 ']' 00:19:18.196 19:21:02 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:19:18.196 19:21:02 ftl.ftl_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:18.196 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:18.196 19:21:02 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:18.196 19:21:02 ftl.ftl_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:18.196 19:21:02 ftl.ftl_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:18.196 19:21:02 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:18.457 [2024-12-16 19:21:02.596049] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:19:18.457 [2024-12-16 19:21:02.596347] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77761 ] 00:19:18.457 [2024-12-16 19:21:02.752156] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:18.719 [2024-12-16 19:21:02.877732] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:19:19.290 19:21:03 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:19.290 19:21:03 ftl.ftl_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:19:19.290 19:21:03 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:19:19.290 19:21:03 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:19:19.291 19:21:03 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:19:19.291 19:21:03 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:19:19.291 19:21:03 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:19:19.291 19:21:03 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:19:19.552 19:21:03 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:19:19.552 19:21:03 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:19:19.552 19:21:03 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:19:19.552 19:21:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:19:19.552 19:21:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:19.552 19:21:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:19:19.552 19:21:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:19:19.552 19:21:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:19:19.813 19:21:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:19.813 { 00:19:19.814 "name": "nvme0n1", 00:19:19.814 "aliases": [ 00:19:19.814 "948132f0-78ec-4bf9-885c-9eb8028e5da6" 00:19:19.814 ], 00:19:19.814 "product_name": "NVMe disk", 00:19:19.814 "block_size": 4096, 00:19:19.814 "num_blocks": 1310720, 00:19:19.814 "uuid": "948132f0-78ec-4bf9-885c-9eb8028e5da6", 00:19:19.814 "numa_id": -1, 00:19:19.814 "assigned_rate_limits": { 00:19:19.814 "rw_ios_per_sec": 0, 00:19:19.814 "rw_mbytes_per_sec": 0, 00:19:19.814 "r_mbytes_per_sec": 0, 00:19:19.814 "w_mbytes_per_sec": 0 00:19:19.814 }, 00:19:19.814 "claimed": true, 00:19:19.814 "claim_type": "read_many_write_one", 00:19:19.814 "zoned": false, 00:19:19.814 "supported_io_types": { 00:19:19.814 "read": true, 00:19:19.814 "write": true, 00:19:19.814 "unmap": true, 00:19:19.814 "flush": true, 00:19:19.814 "reset": true, 00:19:19.814 "nvme_admin": true, 00:19:19.814 "nvme_io": true, 00:19:19.814 "nvme_io_md": false, 00:19:19.814 "write_zeroes": true, 00:19:19.814 "zcopy": false, 00:19:19.814 "get_zone_info": false, 00:19:19.814 "zone_management": false, 00:19:19.814 "zone_append": false, 00:19:19.814 "compare": true, 00:19:19.814 "compare_and_write": false, 00:19:19.814 "abort": true, 00:19:19.814 "seek_hole": false, 00:19:19.814 "seek_data": false, 00:19:19.814 "copy": true, 00:19:19.814 "nvme_iov_md": false 00:19:19.814 }, 00:19:19.814 "driver_specific": { 00:19:19.814 
"nvme": [ 00:19:19.814 { 00:19:19.814 "pci_address": "0000:00:11.0", 00:19:19.814 "trid": { 00:19:19.814 "trtype": "PCIe", 00:19:19.814 "traddr": "0000:00:11.0" 00:19:19.814 }, 00:19:19.814 "ctrlr_data": { 00:19:19.814 "cntlid": 0, 00:19:19.814 "vendor_id": "0x1b36", 00:19:19.814 "model_number": "QEMU NVMe Ctrl", 00:19:19.814 "serial_number": "12341", 00:19:19.814 "firmware_revision": "8.0.0", 00:19:19.814 "subnqn": "nqn.2019-08.org.qemu:12341", 00:19:19.814 "oacs": { 00:19:19.814 "security": 0, 00:19:19.814 "format": 1, 00:19:19.814 "firmware": 0, 00:19:19.814 "ns_manage": 1 00:19:19.814 }, 00:19:19.814 "multi_ctrlr": false, 00:19:19.814 "ana_reporting": false 00:19:19.814 }, 00:19:19.814 "vs": { 00:19:19.814 "nvme_version": "1.4" 00:19:19.814 }, 00:19:19.814 "ns_data": { 00:19:19.814 "id": 1, 00:19:19.814 "can_share": false 00:19:19.814 } 00:19:19.814 } 00:19:19.814 ], 00:19:19.814 "mp_policy": "active_passive" 00:19:19.814 } 00:19:19.814 } 00:19:19.814 ]' 00:19:19.814 19:21:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:19.814 19:21:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:19:19.814 19:21:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:19.814 19:21:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=1310720 00:19:19.814 19:21:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:19:19.814 19:21:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 5120 00:19:19.814 19:21:04 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:19:19.814 19:21:04 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:19:19.814 19:21:04 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:19:19.814 19:21:04 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:19:19.814 19:21:04 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:19:20.075 19:21:04 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=083bbe22-069f-4514-8e36-2d8fac5fe4f0 00:19:20.075 19:21:04 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:19:20.075 19:21:04 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 083bbe22-069f-4514-8e36-2d8fac5fe4f0 00:19:20.336 19:21:04 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:19:20.337 19:21:04 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=c787b26c-0942-49e8-838b-7f2a58cf62f5 00:19:20.337 19:21:04 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u c787b26c-0942-49e8-838b-7f2a58cf62f5 00:19:20.598 19:21:04 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=b4e041e5-d895-47d3-b528-2cfedaffd8dd 00:19:20.598 19:21:04 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 b4e041e5-d895-47d3-b528-2cfedaffd8dd 00:19:20.598 19:21:04 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:19:20.598 19:21:04 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:19:20.598 19:21:04 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=b4e041e5-d895-47d3-b528-2cfedaffd8dd 00:19:20.598 19:21:04 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:19:20.598 19:21:04 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size b4e041e5-d895-47d3-b528-2cfedaffd8dd 00:19:20.598 19:21:04 
ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=b4e041e5-d895-47d3-b528-2cfedaffd8dd 00:19:20.598 19:21:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:20.598 19:21:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:19:20.598 19:21:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:19:20.598 19:21:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b4e041e5-d895-47d3-b528-2cfedaffd8dd 00:19:20.861 19:21:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:20.861 { 00:19:20.861 "name": "b4e041e5-d895-47d3-b528-2cfedaffd8dd", 00:19:20.861 "aliases": [ 00:19:20.861 "lvs/nvme0n1p0" 00:19:20.861 ], 00:19:20.861 "product_name": "Logical Volume", 00:19:20.861 "block_size": 4096, 00:19:20.861 "num_blocks": 26476544, 00:19:20.861 "uuid": "b4e041e5-d895-47d3-b528-2cfedaffd8dd", 00:19:20.861 "assigned_rate_limits": { 00:19:20.861 "rw_ios_per_sec": 0, 00:19:20.861 "rw_mbytes_per_sec": 0, 00:19:20.861 "r_mbytes_per_sec": 0, 00:19:20.861 "w_mbytes_per_sec": 0 00:19:20.861 }, 00:19:20.861 "claimed": false, 00:19:20.861 "zoned": false, 00:19:20.861 "supported_io_types": { 00:19:20.861 "read": true, 00:19:20.861 "write": true, 00:19:20.861 "unmap": true, 00:19:20.861 "flush": false, 00:19:20.861 "reset": true, 00:19:20.861 "nvme_admin": false, 00:19:20.861 "nvme_io": false, 00:19:20.861 "nvme_io_md": false, 00:19:20.861 "write_zeroes": true, 00:19:20.861 "zcopy": false, 00:19:20.861 "get_zone_info": false, 00:19:20.861 "zone_management": false, 00:19:20.861 "zone_append": false, 00:19:20.861 "compare": false, 00:19:20.861 "compare_and_write": false, 00:19:20.861 "abort": false, 00:19:20.861 "seek_hole": true, 00:19:20.861 "seek_data": true, 00:19:20.861 "copy": false, 00:19:20.861 "nvme_iov_md": false 00:19:20.861 }, 00:19:20.861 "driver_specific": { 00:19:20.861 "lvol": { 00:19:20.861 "lvol_store_uuid": "c787b26c-0942-49e8-838b-7f2a58cf62f5", 00:19:20.861 "base_bdev": "nvme0n1", 00:19:20.861 "thin_provision": true, 00:19:20.861 "num_allocated_clusters": 0, 00:19:20.861 "snapshot": false, 00:19:20.861 "clone": false, 00:19:20.861 "esnap_clone": false 00:19:20.861 } 00:19:20.861 } 00:19:20.861 } 00:19:20.861 ]' 00:19:20.861 19:21:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:20.861 19:21:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:19:20.861 19:21:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:20.861 19:21:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:20.861 19:21:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:20.861 19:21:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:19:20.861 19:21:05 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:19:20.861 19:21:05 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:19:20.861 19:21:05 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:19:21.432 19:21:05 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:19:21.432 19:21:05 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:19:21.432 19:21:05 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size b4e041e5-d895-47d3-b528-2cfedaffd8dd 00:19:21.432 19:21:05 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1382 -- # local bdev_name=b4e041e5-d895-47d3-b528-2cfedaffd8dd 00:19:21.432 19:21:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:21.432 19:21:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:19:21.432 19:21:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:19:21.432 19:21:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b4e041e5-d895-47d3-b528-2cfedaffd8dd 00:19:21.432 19:21:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:21.432 { 00:19:21.432 "name": "b4e041e5-d895-47d3-b528-2cfedaffd8dd", 00:19:21.432 "aliases": [ 00:19:21.432 "lvs/nvme0n1p0" 00:19:21.432 ], 00:19:21.432 "product_name": "Logical Volume", 00:19:21.432 "block_size": 4096, 00:19:21.432 "num_blocks": 26476544, 00:19:21.432 "uuid": "b4e041e5-d895-47d3-b528-2cfedaffd8dd", 00:19:21.432 "assigned_rate_limits": { 00:19:21.432 "rw_ios_per_sec": 0, 00:19:21.432 "rw_mbytes_per_sec": 0, 00:19:21.432 "r_mbytes_per_sec": 0, 00:19:21.432 "w_mbytes_per_sec": 0 00:19:21.432 }, 00:19:21.432 "claimed": false, 00:19:21.432 "zoned": false, 00:19:21.432 "supported_io_types": { 00:19:21.432 "read": true, 00:19:21.432 "write": true, 00:19:21.432 "unmap": true, 00:19:21.432 "flush": false, 00:19:21.432 "reset": true, 00:19:21.432 "nvme_admin": false, 00:19:21.432 "nvme_io": false, 00:19:21.432 "nvme_io_md": false, 00:19:21.432 "write_zeroes": true, 00:19:21.432 "zcopy": false, 00:19:21.432 "get_zone_info": false, 00:19:21.433 "zone_management": false, 00:19:21.433 "zone_append": false, 00:19:21.433 "compare": false, 00:19:21.433 "compare_and_write": false, 00:19:21.433 "abort": false, 00:19:21.433 "seek_hole": true, 00:19:21.433 "seek_data": true, 00:19:21.433 "copy": false, 00:19:21.433 "nvme_iov_md": false 00:19:21.433 }, 00:19:21.433 "driver_specific": { 00:19:21.433 "lvol": { 00:19:21.433 "lvol_store_uuid": "c787b26c-0942-49e8-838b-7f2a58cf62f5", 00:19:21.433 "base_bdev": "nvme0n1", 00:19:21.433 "thin_provision": true, 00:19:21.433 "num_allocated_clusters": 0, 00:19:21.433 "snapshot": false, 00:19:21.433 "clone": false, 00:19:21.433 "esnap_clone": false 00:19:21.433 } 00:19:21.433 } 00:19:21.433 } 00:19:21.433 ]' 00:19:21.433 19:21:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:21.433 19:21:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:19:21.433 19:21:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:21.433 19:21:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:21.433 19:21:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:21.433 19:21:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:19:21.433 19:21:05 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:19:21.433 19:21:05 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:19:21.694 19:21:05 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:19:21.694 19:21:05 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size b4e041e5-d895-47d3-b528-2cfedaffd8dd 00:19:21.694 19:21:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=b4e041e5-d895-47d3-b528-2cfedaffd8dd 00:19:21.694 19:21:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:21.694 19:21:05 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1384 -- # local bs 00:19:21.694 19:21:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:19:21.694 19:21:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b4e041e5-d895-47d3-b528-2cfedaffd8dd 00:19:21.955 19:21:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:21.955 { 00:19:21.955 "name": "b4e041e5-d895-47d3-b528-2cfedaffd8dd", 00:19:21.955 "aliases": [ 00:19:21.955 "lvs/nvme0n1p0" 00:19:21.955 ], 00:19:21.955 "product_name": "Logical Volume", 00:19:21.955 "block_size": 4096, 00:19:21.955 "num_blocks": 26476544, 00:19:21.955 "uuid": "b4e041e5-d895-47d3-b528-2cfedaffd8dd", 00:19:21.955 "assigned_rate_limits": { 00:19:21.955 "rw_ios_per_sec": 0, 00:19:21.955 "rw_mbytes_per_sec": 0, 00:19:21.955 "r_mbytes_per_sec": 0, 00:19:21.955 "w_mbytes_per_sec": 0 00:19:21.955 }, 00:19:21.955 "claimed": false, 00:19:21.955 "zoned": false, 00:19:21.955 "supported_io_types": { 00:19:21.955 "read": true, 00:19:21.955 "write": true, 00:19:21.955 "unmap": true, 00:19:21.955 "flush": false, 00:19:21.955 "reset": true, 00:19:21.955 "nvme_admin": false, 00:19:21.955 "nvme_io": false, 00:19:21.955 "nvme_io_md": false, 00:19:21.955 "write_zeroes": true, 00:19:21.955 "zcopy": false, 00:19:21.955 "get_zone_info": false, 00:19:21.955 "zone_management": false, 00:19:21.955 "zone_append": false, 00:19:21.955 "compare": false, 00:19:21.955 "compare_and_write": false, 00:19:21.955 "abort": false, 00:19:21.955 "seek_hole": true, 00:19:21.955 "seek_data": true, 00:19:21.955 "copy": false, 00:19:21.955 "nvme_iov_md": false 00:19:21.955 }, 00:19:21.955 "driver_specific": { 00:19:21.955 "lvol": { 00:19:21.955 "lvol_store_uuid": "c787b26c-0942-49e8-838b-7f2a58cf62f5", 00:19:21.955 "base_bdev": "nvme0n1", 00:19:21.955 "thin_provision": true, 00:19:21.955 "num_allocated_clusters": 0, 00:19:21.955 "snapshot": false, 00:19:21.955 "clone": false, 00:19:21.955 "esnap_clone": false 00:19:21.955 } 00:19:21.955 } 00:19:21.955 } 00:19:21.955 ]' 00:19:21.955 19:21:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:21.955 19:21:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:19:21.955 19:21:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:21.955 19:21:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:21.955 19:21:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:21.955 19:21:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:19:21.955 19:21:06 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:19:21.955 19:21:06 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d b4e041e5-d895-47d3-b528-2cfedaffd8dd -c nvc0n1p0 --l2p_dram_limit 20 00:19:22.215 [2024-12-16 19:21:06.418464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.215 [2024-12-16 19:21:06.418506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:22.215 [2024-12-16 19:21:06.418517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:22.215 [2024-12-16 19:21:06.418525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.216 [2024-12-16 19:21:06.418570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.216 [2024-12-16 19:21:06.418580] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:22.216 [2024-12-16 19:21:06.418586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:19:22.216 [2024-12-16 19:21:06.418594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.216 [2024-12-16 19:21:06.418607] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:22.216 [2024-12-16 19:21:06.419283] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:22.216 [2024-12-16 19:21:06.419310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.216 [2024-12-16 19:21:06.419318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:22.216 [2024-12-16 19:21:06.419325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.708 ms 00:19:22.216 [2024-12-16 19:21:06.419332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.216 [2024-12-16 19:21:06.419421] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID d2a422a7-153d-417d-827d-ef823b605526 00:19:22.216 [2024-12-16 19:21:06.420367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.216 [2024-12-16 19:21:06.420391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:19:22.216 [2024-12-16 19:21:06.420404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:19:22.216 [2024-12-16 19:21:06.420409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.216 [2024-12-16 19:21:06.425111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.216 [2024-12-16 19:21:06.425135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:22.216 [2024-12-16 19:21:06.425144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.668 ms 00:19:22.216 [2024-12-16 19:21:06.425152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.216 [2024-12-16 19:21:06.425233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.216 [2024-12-16 19:21:06.425255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:22.216 [2024-12-16 19:21:06.425265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:19:22.216 [2024-12-16 19:21:06.425271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.216 [2024-12-16 19:21:06.425323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.216 [2024-12-16 19:21:06.425331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:22.216 [2024-12-16 19:21:06.425339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:22.216 [2024-12-16 19:21:06.425344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.216 [2024-12-16 19:21:06.425362] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:22.216 [2024-12-16 19:21:06.428250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.216 [2024-12-16 19:21:06.428275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:22.216 [2024-12-16 19:21:06.428282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.895 ms 00:19:22.216 [2024-12-16 19:21:06.428292] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.216 [2024-12-16 19:21:06.428317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.216 [2024-12-16 19:21:06.428325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:22.216 [2024-12-16 19:21:06.428332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:22.216 [2024-12-16 19:21:06.428339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.216 [2024-12-16 19:21:06.428349] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:19:22.216 [2024-12-16 19:21:06.428459] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:22.216 [2024-12-16 19:21:06.428471] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:22.216 [2024-12-16 19:21:06.428481] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:22.216 [2024-12-16 19:21:06.428489] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:22.216 [2024-12-16 19:21:06.428497] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:22.216 [2024-12-16 19:21:06.428503] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:22.216 [2024-12-16 19:21:06.428509] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:22.216 [2024-12-16 19:21:06.428514] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:22.216 [2024-12-16 19:21:06.428522] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:22.216 [2024-12-16 19:21:06.428529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.216 [2024-12-16 19:21:06.428536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:22.216 [2024-12-16 19:21:06.428542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.181 ms 00:19:22.216 [2024-12-16 19:21:06.428549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.216 [2024-12-16 19:21:06.428614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.216 [2024-12-16 19:21:06.428624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:22.216 [2024-12-16 19:21:06.428630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:19:22.216 [2024-12-16 19:21:06.428638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.216 [2024-12-16 19:21:06.428706] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:22.216 [2024-12-16 19:21:06.428716] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:22.216 [2024-12-16 19:21:06.428722] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:22.216 [2024-12-16 19:21:06.428729] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:22.216 [2024-12-16 19:21:06.428735] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:22.216 [2024-12-16 19:21:06.428742] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:22.216 [2024-12-16 19:21:06.428747] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:22.216 
[2024-12-16 19:21:06.428753] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:22.216 [2024-12-16 19:21:06.428758] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:22.216 [2024-12-16 19:21:06.428764] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:22.216 [2024-12-16 19:21:06.428769] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:22.216 [2024-12-16 19:21:06.428781] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:22.216 [2024-12-16 19:21:06.428786] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:22.216 [2024-12-16 19:21:06.428792] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:22.216 [2024-12-16 19:21:06.428798] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:19:22.216 [2024-12-16 19:21:06.428805] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:22.216 [2024-12-16 19:21:06.428810] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:22.216 [2024-12-16 19:21:06.428817] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:19:22.216 [2024-12-16 19:21:06.428821] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:22.216 [2024-12-16 19:21:06.428828] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:22.216 [2024-12-16 19:21:06.428833] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:22.216 [2024-12-16 19:21:06.428839] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:22.216 [2024-12-16 19:21:06.428844] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:22.216 [2024-12-16 19:21:06.428850] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:22.216 [2024-12-16 19:21:06.428855] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:22.216 [2024-12-16 19:21:06.428860] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:22.216 [2024-12-16 19:21:06.428865] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:22.216 [2024-12-16 19:21:06.428872] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:22.216 [2024-12-16 19:21:06.428876] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:22.216 [2024-12-16 19:21:06.428883] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:19:22.216 [2024-12-16 19:21:06.428888] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:22.216 [2024-12-16 19:21:06.428895] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:22.216 [2024-12-16 19:21:06.428900] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:19:22.216 [2024-12-16 19:21:06.428907] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:22.216 [2024-12-16 19:21:06.428913] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:22.216 [2024-12-16 19:21:06.428919] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:19:22.216 [2024-12-16 19:21:06.428924] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:22.216 [2024-12-16 19:21:06.428932] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:22.216 [2024-12-16 19:21:06.428937] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:19:22.216 [2024-12-16 19:21:06.428943] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:22.216 [2024-12-16 19:21:06.428948] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:22.216 [2024-12-16 19:21:06.428955] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:19:22.216 [2024-12-16 19:21:06.428959] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:22.216 [2024-12-16 19:21:06.428965] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:22.216 [2024-12-16 19:21:06.428971] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:22.216 [2024-12-16 19:21:06.428978] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:22.216 [2024-12-16 19:21:06.428983] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:22.216 [2024-12-16 19:21:06.428992] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:22.216 [2024-12-16 19:21:06.428997] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:22.216 [2024-12-16 19:21:06.429003] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:22.216 [2024-12-16 19:21:06.429008] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:22.216 [2024-12-16 19:21:06.429014] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:22.216 [2024-12-16 19:21:06.429019] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:22.217 [2024-12-16 19:21:06.429026] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:22.217 [2024-12-16 19:21:06.429033] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:22.217 [2024-12-16 19:21:06.429041] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:22.217 [2024-12-16 19:21:06.429046] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:19:22.217 [2024-12-16 19:21:06.429053] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:19:22.217 [2024-12-16 19:21:06.429058] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:19:22.217 [2024-12-16 19:21:06.429064] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:19:22.217 [2024-12-16 19:21:06.429070] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:19:22.217 [2024-12-16 19:21:06.429077] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:19:22.217 [2024-12-16 19:21:06.429083] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:19:22.217 [2024-12-16 19:21:06.429092] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:19:22.217 [2024-12-16 19:21:06.429097] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:19:22.217 [2024-12-16 19:21:06.429104] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:19:22.217 [2024-12-16 19:21:06.429110] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:19:22.217 [2024-12-16 19:21:06.429117] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:19:22.217 [2024-12-16 19:21:06.429122] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:19:22.217 [2024-12-16 19:21:06.429129] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:22.217 [2024-12-16 19:21:06.429136] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:22.217 [2024-12-16 19:21:06.429144] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:22.217 [2024-12-16 19:21:06.429150] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:22.217 [2024-12-16 19:21:06.429156] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:22.217 [2024-12-16 19:21:06.429162] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:22.217 [2024-12-16 19:21:06.429168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.217 [2024-12-16 19:21:06.429184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:22.217 [2024-12-16 19:21:06.429192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.513 ms 00:19:22.217 [2024-12-16 19:21:06.429198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.217 [2024-12-16 19:21:06.429247] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
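The layout figures printed above are internally consistent: 20971520 L2P entries at an address size of 4 bytes come to exactly the 80.00 MiB reported for the l2p region, while the --l2p_dram_limit 20 passed to bdev_ftl_create caps the resident portion, which is why a later notice in this log reports "l2p maximum resident size is: 19 (of 20) MiB". A quick check of that arithmetic:

  echo $(( 20971520 * 4 / 1024 / 1024 ))   # -> 80, the l2p region size in MiB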
00:19:22.217 [2024-12-16 19:21:06.429255] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:19:25.514 [2024-12-16 19:21:09.450802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.514 [2024-12-16 19:21:09.450901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:19:25.514 [2024-12-16 19:21:09.450923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3021.530 ms 00:19:25.514 [2024-12-16 19:21:09.450933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.514 [2024-12-16 19:21:09.483673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.514 [2024-12-16 19:21:09.483743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:25.514 [2024-12-16 19:21:09.483760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.460 ms 00:19:25.514 [2024-12-16 19:21:09.483769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.514 [2024-12-16 19:21:09.483919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.514 [2024-12-16 19:21:09.483931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:25.514 [2024-12-16 19:21:09.483947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:19:25.514 [2024-12-16 19:21:09.483955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.514 [2024-12-16 19:21:09.535953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.514 [2024-12-16 19:21:09.536015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:25.514 [2024-12-16 19:21:09.536032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 51.960 ms 00:19:25.514 [2024-12-16 19:21:09.536042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.514 [2024-12-16 19:21:09.536093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.514 [2024-12-16 19:21:09.536102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:25.514 [2024-12-16 19:21:09.536113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:25.514 [2024-12-16 19:21:09.536124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.514 [2024-12-16 19:21:09.536787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.514 [2024-12-16 19:21:09.536821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:25.514 [2024-12-16 19:21:09.536834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.551 ms 00:19:25.514 [2024-12-16 19:21:09.536843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.514 [2024-12-16 19:21:09.536969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.514 [2024-12-16 19:21:09.536979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:25.514 [2024-12-16 19:21:09.536992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:19:25.514 [2024-12-16 19:21:09.537000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.514 [2024-12-16 19:21:09.553314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.514 [2024-12-16 19:21:09.553363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:25.514 [2024-12-16 
19:21:09.553378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.291 ms 00:19:25.514 [2024-12-16 19:21:09.553397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.515 [2024-12-16 19:21:09.567191] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:19:25.515 [2024-12-16 19:21:09.575415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.515 [2024-12-16 19:21:09.575470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:25.515 [2024-12-16 19:21:09.575482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.931 ms 00:19:25.515 [2024-12-16 19:21:09.575493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.515 [2024-12-16 19:21:09.678937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.515 [2024-12-16 19:21:09.679212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:19:25.515 [2024-12-16 19:21:09.679240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 103.411 ms 00:19:25.515 [2024-12-16 19:21:09.679252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.515 [2024-12-16 19:21:09.679456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.515 [2024-12-16 19:21:09.679475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:25.515 [2024-12-16 19:21:09.679485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.157 ms 00:19:25.515 [2024-12-16 19:21:09.679499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.515 [2024-12-16 19:21:09.706412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.515 [2024-12-16 19:21:09.706629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:19:25.515 [2024-12-16 19:21:09.706652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.855 ms 00:19:25.515 [2024-12-16 19:21:09.706664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.515 [2024-12-16 19:21:09.732375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.515 [2024-12-16 19:21:09.732435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:19:25.515 [2024-12-16 19:21:09.732449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.639 ms 00:19:25.515 [2024-12-16 19:21:09.732458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.515 [2024-12-16 19:21:09.733074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.515 [2024-12-16 19:21:09.733095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:25.515 [2024-12-16 19:21:09.733106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.567 ms 00:19:25.515 [2024-12-16 19:21:09.733116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.515 [2024-12-16 19:21:09.821631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.515 [2024-12-16 19:21:09.821866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:19:25.515 [2024-12-16 19:21:09.821892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 88.472 ms 00:19:25.515 [2024-12-16 19:21:09.821905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.515 [2024-12-16 
19:21:09.850634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.515 [2024-12-16 19:21:09.850696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:19:25.515 [2024-12-16 19:21:09.850713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.601 ms 00:19:25.515 [2024-12-16 19:21:09.850724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.776 [2024-12-16 19:21:09.877679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.776 [2024-12-16 19:21:09.877739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:19:25.776 [2024-12-16 19:21:09.877753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.903 ms 00:19:25.776 [2024-12-16 19:21:09.877763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.776 [2024-12-16 19:21:09.904833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.776 [2024-12-16 19:21:09.905040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:25.776 [2024-12-16 19:21:09.905063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.019 ms 00:19:25.776 [2024-12-16 19:21:09.905073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.776 [2024-12-16 19:21:09.905158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.776 [2024-12-16 19:21:09.905200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:25.776 [2024-12-16 19:21:09.905211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:25.776 [2024-12-16 19:21:09.905222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.776 [2024-12-16 19:21:09.905336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.776 [2024-12-16 19:21:09.905350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:25.776 [2024-12-16 19:21:09.905359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:19:25.776 [2024-12-16 19:21:09.905370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.776 [2024-12-16 19:21:09.906559] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3487.544 ms, result 0 00:19:25.776 { 00:19:25.776 "name": "ftl0", 00:19:25.776 "uuid": "d2a422a7-153d-417d-827d-ef823b605526" 00:19:25.776 } 00:19:25.776 19:21:09 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:19:25.776 19:21:09 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:19:25.776 19:21:09 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:19:26.038 19:21:10 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:19:26.038 [2024-12-16 19:21:10.250726] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:19:26.038 I/O size of 69632 is greater than zero copy threshold (65536). 00:19:26.038 Zero copy mechanism will not be used. 00:19:26.038 Running I/O for 4 seconds... 
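The 69632-byte I/O size used for this first pass is 17 x 4096-byte blocks (68 KiB), which is why bdevperf warns above that it exceeds the 65536-byte zero-copy threshold and falls back to buffered I/O. The MiB/s column in the results that follow is simply IOPS times I/O size:

  echo '1837.17 * 69632 / 1048576' | bc -l   # -> ~122.00, matching the reported MiB/s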
00:19:27.924 1083.00 IOPS, 71.92 MiB/s [2024-12-16T19:21:13.664Z] 1485.00 IOPS, 98.61 MiB/s [2024-12-16T19:21:14.607Z] 1999.67 IOPS, 132.79 MiB/s [2024-12-16T19:21:14.608Z] 1837.75 IOPS, 122.04 MiB/s 00:19:30.254 Latency(us) 00:19:30.254 [2024-12-16T19:21:14.608Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:30.254 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:19:30.254 ftl0 : 4.00 1837.17 122.00 0.00 0.00 568.50 149.66 3049.94 00:19:30.254 [2024-12-16T19:21:14.608Z] =================================================================================================================== 00:19:30.254 [2024-12-16T19:21:14.608Z] Total : 1837.17 122.00 0.00 0.00 568.50 149.66 3049.94 00:19:30.254 [2024-12-16 19:21:14.261700] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:19:30.254 { 00:19:30.254 "results": [ 00:19:30.254 { 00:19:30.254 "job": "ftl0", 00:19:30.254 "core_mask": "0x1", 00:19:30.254 "workload": "randwrite", 00:19:30.254 "status": "finished", 00:19:30.254 "queue_depth": 1, 00:19:30.254 "io_size": 69632, 00:19:30.254 "runtime": 4.001801, 00:19:30.254 "iops": 1837.1728129409732, 00:19:30.254 "mibps": 121.99975710936151, 00:19:30.254 "io_failed": 0, 00:19:30.254 "io_timeout": 0, 00:19:30.254 "avg_latency_us": 568.5009592366285, 00:19:30.254 "min_latency_us": 149.66153846153847, 00:19:30.254 "max_latency_us": 3049.944615384615 00:19:30.254 } 00:19:30.254 ], 00:19:30.254 "core_count": 1 00:19:30.254 } 00:19:30.254 19:21:14 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:19:30.254 [2024-12-16 19:21:14.376961] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:19:30.254 Running I/O for 4 seconds... 
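Each perform_tests invocation also emits a machine-readable results object like the one printed above; since these scripts already lean on jq elsewhere, the same figures can be pulled from a captured copy. A sketch, with results.json as a placeholder for wherever that object is saved:

  jq -r '.results[] | "\(.job): \(.iops) IOPS, \(.mibps) MiB/s, avg \(.avg_latency_us) us"' results.json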
00:19:32.190 5665.00 IOPS, 22.13 MiB/s [2024-12-16T19:21:17.488Z] 5547.00 IOPS, 21.67 MiB/s [2024-12-16T19:21:18.430Z] 5536.67 IOPS, 21.63 MiB/s [2024-12-16T19:21:18.430Z] 5618.00 IOPS, 21.95 MiB/s 00:19:34.076 Latency(us) 00:19:34.076 [2024-12-16T19:21:18.430Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:34.076 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:19:34.076 ftl0 : 4.04 5597.78 21.87 0.00 0.00 22748.49 329.26 49605.71 00:19:34.076 [2024-12-16T19:21:18.430Z] =================================================================================================================== 00:19:34.076 [2024-12-16T19:21:18.430Z] Total : 5597.78 21.87 0.00 0.00 22748.49 0.00 49605.71 00:19:34.076 [2024-12-16 19:21:18.423812] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:19:34.076 { 00:19:34.076 "results": [ 00:19:34.076 { 00:19:34.076 "job": "ftl0", 00:19:34.076 "core_mask": "0x1", 00:19:34.076 "workload": "randwrite", 00:19:34.076 "status": "finished", 00:19:34.076 "queue_depth": 128, 00:19:34.076 "io_size": 4096, 00:19:34.076 "runtime": 4.036421, 00:19:34.076 "iops": 5597.780806313316, 00:19:34.076 "mibps": 21.86633127466139, 00:19:34.076 "io_failed": 0, 00:19:34.076 "io_timeout": 0, 00:19:34.076 "avg_latency_us": 22748.4878884709, 00:19:34.076 "min_latency_us": 329.2553846153846, 00:19:34.076 "max_latency_us": 49605.71076923077 00:19:34.076 } 00:19:34.076 ], 00:19:34.076 "core_count": 1 00:19:34.076 } 00:19:34.337 19:21:18 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:19:34.337 [2024-12-16 19:21:18.542809] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:19:34.337 Running I/O for 4 seconds... 
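This final pass switches the workload to verify, which writes a pattern and reads it back for comparison rather than only timing writes. The verification range reported below, start 0x0 length 0x1400000, spans 20971520 blocks, the same count as the L2P entries in the startup layout dump, so the pass covers the entire ftl0 bdev (80 GiB of logical space at 4096 bytes per block):

  printf '%d blocks, %d GiB\n' $(( 0x1400000 )) $(( 0x1400000 * 4096 / 1024**3 ))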
00:19:36.303 4759.00 IOPS, 18.59 MiB/s [2024-12-16T19:21:21.600Z] 5187.00 IOPS, 20.26 MiB/s [2024-12-16T19:21:22.985Z] 5118.00 IOPS, 19.99 MiB/s [2024-12-16T19:21:22.985Z] 4907.00 IOPS, 19.17 MiB/s 00:19:38.631 Latency(us) 00:19:38.631 [2024-12-16T19:21:22.985Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:38.631 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:38.631 Verification LBA range: start 0x0 length 0x1400000 00:19:38.631 ftl0 : 4.01 4926.00 19.24 0.00 0.00 25920.43 299.32 39926.55 00:19:38.631 [2024-12-16T19:21:22.986Z] =================================================================================================================== 00:19:38.632 [2024-12-16T19:21:22.986Z] Total : 4926.00 19.24 0.00 0.00 25920.43 0.00 39926.55 00:19:38.632 { 00:19:38.632 "results": [ 00:19:38.632 { 00:19:38.632 "job": "ftl0", 00:19:38.632 "core_mask": "0x1", 00:19:38.632 "workload": "verify", 00:19:38.632 "status": "finished", 00:19:38.632 "verify_range": { 00:19:38.632 "start": 0, 00:19:38.632 "length": 20971520 00:19:38.632 }, 00:19:38.632 "queue_depth": 128, 00:19:38.632 "io_size": 4096, 00:19:38.632 "runtime": 4.009138, 00:19:38.632 "iops": 4925.996560856723, 00:19:38.632 "mibps": 19.242174065846573, 00:19:38.632 "io_failed": 0, 00:19:38.632 "io_timeout": 0, 00:19:38.632 "avg_latency_us": 25920.42819726023, 00:19:38.632 "min_latency_us": 299.32307692307694, 00:19:38.632 "max_latency_us": 39926.54769230769 00:19:38.632 } 00:19:38.632 ], 00:19:38.632 "core_count": 1 00:19:38.632 } 00:19:38.632 [2024-12-16 19:21:22.570253] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:19:38.632 19:21:22 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:19:38.632 [2024-12-16 19:21:22.774819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.632 [2024-12-16 19:21:22.774858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:38.632 [2024-12-16 19:21:22.774867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:19:38.632 [2024-12-16 19:21:22.774875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.632 [2024-12-16 19:21:22.774890] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:38.632 [2024-12-16 19:21:22.776891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.632 [2024-12-16 19:21:22.776913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:38.632 [2024-12-16 19:21:22.776922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.988 ms 00:19:38.632 [2024-12-16 19:21:22.776929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.632 [2024-12-16 19:21:22.778541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.632 [2024-12-16 19:21:22.778567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:38.632 [2024-12-16 19:21:22.778575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.596 ms 00:19:38.632 [2024-12-16 19:21:22.778585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.632 [2024-12-16 19:21:22.915921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.632 [2024-12-16 19:21:22.915951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist 
L2P 00:19:38.632 [2024-12-16 19:21:22.915965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 137.319 ms 00:19:38.632 [2024-12-16 19:21:22.915971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.632 [2024-12-16 19:21:22.920770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.632 [2024-12-16 19:21:22.920794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:38.632 [2024-12-16 19:21:22.920804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.772 ms 00:19:38.632 [2024-12-16 19:21:22.920813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.632 [2024-12-16 19:21:22.938527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.632 [2024-12-16 19:21:22.938555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:38.632 [2024-12-16 19:21:22.938566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.669 ms 00:19:38.632 [2024-12-16 19:21:22.938572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.632 [2024-12-16 19:21:22.950542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.632 [2024-12-16 19:21:22.950573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:38.632 [2024-12-16 19:21:22.950583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.943 ms 00:19:38.632 [2024-12-16 19:21:22.950590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.632 [2024-12-16 19:21:22.950687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.632 [2024-12-16 19:21:22.950695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:38.632 [2024-12-16 19:21:22.950705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:19:38.632 [2024-12-16 19:21:22.950711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.632 [2024-12-16 19:21:22.968545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.632 [2024-12-16 19:21:22.968569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:38.632 [2024-12-16 19:21:22.968579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.821 ms 00:19:38.632 [2024-12-16 19:21:22.968585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.894 [2024-12-16 19:21:22.985775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.894 [2024-12-16 19:21:22.985890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:38.894 [2024-12-16 19:21:22.985906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.162 ms 00:19:38.894 [2024-12-16 19:21:22.985912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.894 [2024-12-16 19:21:23.002948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.894 [2024-12-16 19:21:23.002973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:38.894 [2024-12-16 19:21:23.002982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.010 ms 00:19:38.894 [2024-12-16 19:21:23.002988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.894 [2024-12-16 19:21:23.019898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.894 [2024-12-16 19:21:23.019922] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
[2024-12-16 19:21:23.019932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.858 ms
[2024-12-16 19:21:23.019937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
[2024-12-16 19:21:23.019964] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
[2024-12-16 19:21:23.019974 .. 23.020658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1 .. Band 100: 0 / 261120 wr_cnt: 0 state: free (all 100 bands report identical values)
[2024-12-16 19:21:23.020670] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
[2024-12-16 19:21:23.020676] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d2a422a7-153d-417d-827d-ef823b605526
[2024-12-16 19:21:23.020684] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
[2024-12-16 19:21:23.020690] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
[2024-12-16 19:21:23.020695] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
[2024-12-16 19:21:23.020702] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
[2024-12-16 19:21:23.020707] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
[2024-12-16 19:21:23.020714] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  crit: 0
[2024-12-16 19:21:23.020723] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  high: 0
[2024-12-16 19:21:23.020730] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  low: 0
[2024-12-16 19:21:23.020735] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  start: 0
[2024-12-16 19:21:23.020741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
[2024-12-16 19:21:23.020747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
[2024-12-16 19:21:23.020755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.779 ms
[2024-12-16 19:21:23.020760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
[2024-12-16 19:21:23.030245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
[2024-12-16 19:21:23.030268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
[2024-12-16 19:21:23.030278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.460 ms
[2024-12-16 19:21:23.030284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
[2024-12-16 19:21:23.030557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
[2024-12-16 19:21:23.030565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
[2024-12-16 19:21:23.030572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.259 ms
[2024-12-16 19:21:23.030578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
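A note on the statistics block above: the FTL dump reports WAF (write amplification factor) as total device writes divided by user writes, and with total writes: 960 against user writes: 0 the quotient is undefined, which the dump prints as inf; with zero user writes, the 960 writes can only be the device's own metadata traffic. A minimal sketch that recomputes the figure from a captured dump (the log file name and grep-based parsing are illustrative assumptions, not part of the test suite):

    # Hedged sketch: recompute WAF from the two counters in a captured FTL dump.
    # Assumes the shutdown dump was saved to ftl_shutdown.log; the patterns match
    # the ftl_debug.c lines above, but the parsing itself is illustrative only.
    total=$(grep -o 'total writes: [0-9]*' ftl_shutdown.log | awk '{print $3}')
    user=$(grep -o 'user writes: [0-9]*' ftl_shutdown.log | awk '{print $3}')
    if [ "$user" -eq 0 ]; then
      echo "WAF: inf"    # division by zero: no user I/O reached the device
    else
      echo "WAF: $(awk -v t="$total" -v u="$user" 'BEGIN {printf "%.2f", t/u}')"
    fi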
[2024-12-16 19:21:23.057735 .. 23.165105] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Rollback, 12 steps, each with duration: 0.000 ms, status: 0: Initialize reloc; Initialize bands metadata; Initialize trim map; Initialize valid map; Initialize NV cache; Initialize metadata; Initialize core IO channel; Initialize bands; Initialize memory pools; Initialize superblock; Open cache bdev; Open base bdev
[2024-12-16 19:21:23.165208] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 390.348 ms, result 0
true
19:21:23 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 77761
19:21:23 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 77761 ']'
19:21:23 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # kill -0 77761
19:21:23 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # uname
19:21:23 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
19:21:23 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77761
killing process with pid 77761
Received shutdown signal, test time was about 4.000000 seconds

                                                      Latency(us)
[2024-12-16T19:21:23.250Z] Device Information : runtime(s)  IOPS  MiB/s  Fail/s  TO/s  Average  min  max
[2024-12-16T19:21:23.250Z] ==============================================================================
[2024-12-16T19:21:23.250Z] Total              :      0.00   0.00   0.00    0.00  0.00     0.00  0.00
19:21:23 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_0
19:21:23 ftl.ftl_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
19:21:23 ftl.ftl_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77761'
19:21:23 ftl.ftl_bdevperf -- common/autotest_common.sh@973 -- # kill 77761
19:21:23 ftl.ftl_bdevperf -- common/autotest_common.sh@978 -- # wait 77761
Remove shared memory files
19:21:27 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT
19:21:27 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm
19:21:27 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files
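The @954-@978 trace above is the standard autotest teardown pattern: validate the PID argument, probe the process with kill -0, resolve its name via ps for the log message, then kill it and wait so its exit status is collected. A condensed, hedged sketch of that pattern (an illustrative rewrite, not the verbatim helper from common/autotest_common.sh):

    # Hedged sketch of the killprocess pattern traced above (illustrative only).
    killprocess() {
      local pid=$1
      [ -z "$pid" ] && return 1                 # @954: refuse an empty PID
      kill -0 "$pid" || return 0                # @958: nothing to do if it is gone
      local name
      name=$(ps --no-headers -o comm= "$pid")   # @960: process name for the log line
      echo "killing process with pid $pid ($name)"
      kill "$pid"                               # @973: ask it to shut down
      wait "$pid"                               # @978: reap it (works because it is our child)
    }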
19:21:27 ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f
19:21:27 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f
19:21:27 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f
19:21:27 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
19:21:27 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f
************************************
END TEST ftl_bdevperf
************************************

real	0m25.517s
user	0m28.000s
sys	0m1.102s

19:21:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable
19:21:27 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x
19:21:27 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0
19:21:27 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
19:21:27 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable
19:21:27 ftl -- common/autotest_common.sh@10 -- # set +x
************************************
START TEST ftl_trim
************************************
19:21:27 ftl.ftl_trim -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0
* Looking for test storage...
* Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl
19:21:28 ftl.ftl_trim -- common/autotest_common.sh@1710 -- # [[ y == y ]]
19:21:28 ftl.ftl_trim -- common/autotest_common.sh@1711 -- # lcov --version
19:21:28 ftl.ftl_trim -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
19:21:28 ftl.ftl_trim -- common/autotest_common.sh@1711 -- # lt 1.15 2
19:21:28 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
19:21:28 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l
19:21:28 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l
19:21:28 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-:
19:21:28 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1
19:21:28 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-:
19:21:28 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2
19:21:28 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<'
19:21:28 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2
19:21:28 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1
19:21:28 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
19:21:28 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in
19:21:28 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1
19:21:28 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 ))
19:21:28 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
19:21:28 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1
19:21:28 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1
19:21:28 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
19:21:28 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1
19:21:28 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1
19:21:28 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2
19:21:28 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2
19:21:28 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
19:21:28 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2
19:21:28 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2
19:21:28 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
19:21:28 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
19:21:28 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0
19:21:28 ftl.ftl_trim -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
19:21:28 ftl.ftl_trim -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1'
19:21:28 ftl.ftl_trim -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1'
19:21:28 ftl.ftl_trim -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1'
19:21:28 ftl.ftl_trim -- common/autotest_common.sh@1725 -- # LCOV='lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1'
19:21:28 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh
19:21:28 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh
19:21:28 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl
19:21:28 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl
19:21:28 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../..
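The scripts/common.sh trace above is bash xtrace of a component-wise version comparison: both version strings are split on ., - and : via IFS, each component is validated as a number, and the loop decides as soon as one side wins (here 1.15 < 2 is settled by the first component, so lt succeeds). A compact, hedged re-implementation of the same idea (the function and variable names are illustrative, not the verbatim helper):

    # Hedged sketch: component-wise version compare in the spirit of the
    # cmp_versions trace above (illustrative names, same algorithm).
    version_lt() {
      local IFS=.-:
      local -a a=($1) b=($2)
      local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
      for (( i = 0; i < n; i++ )); do
        local x=${a[i]:-0} y=${b[i]:-0}     # missing components count as 0
        (( x > y )) && return 1             # left side is newer: not less-than
        (( x < y )) && return 0             # left side is older: less-than
      done
      return 1                              # equal: not less-than
    }
    version_lt 1.15 2 && echo "1.15 < 2"    # prints: 1.15 < 2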
19:21:28 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk
19:21:28 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
19:21:28 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]'
19:21:28 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
19:21:28 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]'
19:21:28 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
19:21:28 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid=
19:21:28 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
19:21:28 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]'
19:21:28 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock
19:21:28 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
19:21:28 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid=
19:21:28 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
19:21:28 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
19:21:28 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0
19:21:28 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0
19:21:28 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240
19:21:28 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536
19:21:28 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024
19:21:28 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]]
19:21:28 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0
19:21:28 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
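At the 4096-byte logical block size this setup's bdevs report further down, the two trim.sh knobs above translate to 256 MiB of test data (data_size_in_blocks=65536) and 4 MiB per trim (unmap_size_in_blocks=1024). A quick sanity check (a hedged sketch; the block size is taken from the bdev_get_bdevs output later in this log, not from trim.sh itself):

    # Hedged sketch: convert the trim.sh block counts to bytes/MiB, assuming the
    # 4096-byte block_size reported by bdev_get_bdevs below.
    bs=4096
    for blocks in 65536 1024; do
      echo "$blocks blocks = $(( blocks * bs )) bytes = $(( blocks * bs / 1024 / 1024 )) MiB"
    done
    # 65536 blocks = 268435456 bytes = 256 MiB
    # 1024 blocks = 4194304 bytes = 4 MiB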
19:21:28 ftl.ftl_trim -- ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT
19:21:28 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=78113
19:21:28 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 78113
19:21:28 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7
19:21:28 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 78113 ']'
19:21:28 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
19:21:28 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100
19:21:28 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
19:21:28 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable
19:21:28 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x
[2024-12-16 19:21:28.211866] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization...
[2024-12-16 19:21:28.212256] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78113 ]
[2024-12-16 19:21:28.378661] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
[2024-12-16 19:21:28.504666] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
[2024-12-16 19:21:28.504952] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
[2024-12-16 19:21:28.505041] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
19:21:29 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 ))
19:21:29 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0
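waitforlisten above blocks until the freshly launched spdk_tgt answers on its UNIX-domain RPC socket; only then does the script start issuing bdev RPCs. A hedged sketch of that startup handshake (this is not the real helper: using rpc_get_methods as a liveness probe and the exact loop shape are my assumptions, though the max_retries=100 bound mirrors the trace above):

    # Hedged sketch of the spdk_tgt startup handshake traced above.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 &
    svcpid=$!
    for (( i = 0; i < 100; i++ )); do        # max_retries=100, as in the trace
      if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
           rpc_get_methods >/dev/null 2>&1; then
        break                                # target is up and serving RPCs
      fi
      kill -0 "$svcpid" || exit 1            # target died during startup
      sleep 0.5
    done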
19:21:29 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424
19:21:29 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0
19:21:29 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0
19:21:29 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424
19:21:29 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev
19:21:29 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
19:21:29 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1
19:21:29 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size
19:21:29 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1
19:21:29 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1
19:21:29 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info
19:21:29 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs
19:21:29 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb
19:21:29 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1
19:21:29 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[
  {
    "name": "nvme0n1",
    "aliases": [
      "d242f871-6bb7-4319-be7d-314b13fda414"
    ],
    "product_name": "NVMe disk",
    "block_size": 4096,
    "num_blocks": 1310720,
    "uuid": "d242f871-6bb7-4319-be7d-314b13fda414",
    "numa_id": -1,
    "assigned_rate_limits": {
      "rw_ios_per_sec": 0,
      "rw_mbytes_per_sec": 0,
      "r_mbytes_per_sec": 0,
      "w_mbytes_per_sec": 0
    },
    "claimed": true,
    "claim_type": "read_many_write_one",
    "zoned": false,
    "supported_io_types": {
      "read": true,
      "write": true,
      "unmap": true,
      "flush": true,
      "reset": true,
      "nvme_admin": true,
      "nvme_io": true,
      "nvme_io_md": false,
      "write_zeroes": true,
      "zcopy": false,
      "get_zone_info": false,
      "zone_management": false,
      "zone_append": false,
      "compare": true,
      "compare_and_write": false,
      "abort": true,
      "seek_hole": false,
      "seek_data": false,
      "copy": true,
      "nvme_iov_md": false
    },
    "driver_specific": {
      "nvme": [
        {
          "pci_address": "0000:00:11.0",
          "trid": {
            "trtype": "PCIe",
            "traddr": "0000:00:11.0"
          },
          "ctrlr_data": {
            "cntlid": 0,
            "vendor_id": "0x1b36",
            "model_number": "QEMU NVMe Ctrl",
            "serial_number": "12341",
            "firmware_revision": "8.0.0",
            "subnqn": "nqn.2019-08.org.qemu:12341",
            "oacs": {
              "security": 0,
              "format": 1,
              "firmware": 0,
              "ns_manage": 1
            },
            "multi_ctrlr": false,
            "ana_reporting": false
          },
          "vs": {
            "nvme_version": "1.4"
          },
          "ns_data": {
            "id": 1,
            "can_share": false
          }
        }
      ],
      "mp_policy": "active_passive"
    }
  }
]'
19:21:29 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size'
19:21:29 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096
19:21:29 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks'
19:21:29 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=1310720
19:21:29 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=5120
19:21:29 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 5120
19:21:29 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120
19:21:29 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]]
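get_bdev_size above is just block_size times num_blocks converted to MiB: 4096 B x 1310720 blocks = 5368709120 B = 5120 MiB (the same arithmetic later yields 103424 MiB for the logical volume). A hedged equivalent using the same jq filters traced above (the function name bdev_size_mib is illustrative):

    # Hedged sketch: bdev size in MiB from rpc.py bdev_get_bdevs, mirroring the
    # jq calls in the trace above (bdev_size_mib is an illustrative name).
    bdev_size_mib() {
      local info bs nb
      info=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b "$1")
      bs=$(jq '.[] .block_size' <<< "$info")
      nb=$(jq '.[] .num_blocks' <<< "$info")
      echo $(( bs * nb / 1024 / 1024 ))     # 4096 * 1310720 / 2^20 = 5120
    }
    bdev_size_mib nvme0n1                   # prints 5120 for the namespace above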
19:21:29 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols
19:21:29 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores
19:21:29 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid'
19:21:30 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=c787b26c-0942-49e8-838b-7f2a58cf62f5
19:21:30 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores
19:21:30 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c787b26c-0942-49e8-838b-7f2a58cf62f5
19:21:30 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs
19:21:30 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=73a17cda-3af7-440a-b00b-2d82ee4edb76
19:21:30 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 73a17cda-3af7-440a-b00b-2d82ee4edb76
19:21:30 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=adadfbb0-c75a-4443-83c5-a65b5eef6612
19:21:30 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 adadfbb0-c75a-4443-83c5-a65b5eef6612
19:21:30 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0
19:21:30 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0
19:21:30 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=adadfbb0-c75a-4443-83c5-a65b5eef6612
19:21:30 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size=
19:21:30 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size adadfbb0-c75a-4443-83c5-a65b5eef6612
19:21:30 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=adadfbb0-c75a-4443-83c5-a65b5eef6612
19:21:30 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info
19:21:30 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs
19:21:30 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb
19:21:30 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b adadfbb0-c75a-4443-83c5-a65b5eef6612
19:21:30 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[
  {
    "name": "adadfbb0-c75a-4443-83c5-a65b5eef6612",
    "aliases": [
      "lvs/nvme0n1p0"
    ],
    "product_name": "Logical Volume",
    "block_size": 4096,
    "num_blocks": 26476544,
    "uuid": "adadfbb0-c75a-4443-83c5-a65b5eef6612",
    "assigned_rate_limits": {
      "rw_ios_per_sec": 0,
      "rw_mbytes_per_sec": 0,
      "r_mbytes_per_sec": 0,
      "w_mbytes_per_sec": 0
    },
    "claimed": false,
    "zoned": false,
    "supported_io_types": {
      "read": true,
      "write": true,
      "unmap": true,
      "flush": false,
      "reset": true,
      "nvme_admin": false,
      "nvme_io": false,
      "nvme_io_md": false,
      "write_zeroes": true,
      "zcopy": false,
      "get_zone_info": false,
      "zone_management": false,
      "zone_append": false,
      "compare": false,
      "compare_and_write": false,
      "abort": false,
      "seek_hole": true,
      "seek_data": true,
      "copy": false,
      "nvme_iov_md": false
    },
    "driver_specific": {
      "lvol": {
        "lvol_store_uuid": "73a17cda-3af7-440a-b00b-2d82ee4edb76",
        "base_bdev": "nvme0n1",
        "thin_provision": true,
        "num_allocated_clusters": 0,
        "snapshot": false,
        "clone": false,
        "esnap_clone": false
      }
    }
  }
]'
19:21:30 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size'
19:21:30 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096
19:21:30 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks'
19:21:30 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544
19:21:30 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424
19:21:30 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424
19:21:30 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171
19:21:30 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev
19:21:31 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
19:21:31 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1
19:21:31 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]]
19:21:31 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size adadfbb0-c75a-4443-83c5-a65b5eef6612
19:21:31 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=adadfbb0-c75a-4443-83c5-a65b5eef6612
19:21:31 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info
19:21:31 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs
19:21:31 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb
19:21:31 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b adadfbb0-c75a-4443-83c5-a65b5eef6612
19:21:31 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ (bdev_get_bdevs returned the same Logical Volume JSON as above) ]'
19:21:31 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size'
19:21:31 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096
19:21:31 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks'
19:21:31 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544
19:21:31 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424
19:21:31 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424
19:21:31 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171
19:21:31 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1
19:21:31 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0
19:21:31 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60
19:21:31 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size adadfbb0-c75a-4443-83c5-a65b5eef6612
19:21:31 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=adadfbb0-c75a-4443-83c5-a65b5eef6612
19:21:31 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info
19:21:31 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs
19:21:31 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb
19:21:31 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b adadfbb0-c75a-4443-83c5-a65b5eef6612
19:21:31 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ (bdev_get_bdevs returned the same Logical Volume JSON as above) ]'
19:21:31 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size'
19:21:31 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096
19:21:31 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks'
19:21:31 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544
19:21:31 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424
19:21:31 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424
19:21:31 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60
19:21:31 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d adadfbb0-c75a-4443-83c5-a65b5eef6612 -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10
[2024-12-16 19:21:32.134678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
[2024-12-16 19:21:32.134717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration
[2024-12-16 19:21:32.134731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
[2024-12-16 19:21:32.134738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
[2024-12-16 19:21:32.136997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
[2024-12-16 19:21:32.137110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
[2024-12-16 19:21:32.137126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.233 ms
[2024-12-16 19:21:32.137133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
[2024-12-16 19:21:32.137263] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
[2024-12-16 19:21:32.137831] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
[2024-12-16 19:21:32.137853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
[2024-12-16 19:21:32.137859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
[2024-12-16 19:21:32.137867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.597 ms
[2024-12-16 19:21:32.137874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
[2024-12-16 19:21:32.137962] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 36f840fd-a489-4625-9f63-997690818b8f
[2024-12-16 19:21:32.138982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
[2024-12-16 19:21:32.139012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock
[2024-12-16 19:21:32.139019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms
[2024-12-16 19:21:32.139027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
[2024-12-16 19:21:32.144268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
[2024-12-16 19:21:32.144292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
[2024-12-16 19:21:32.144302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.166 ms
[2024-12-16 19:21:32.144309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
[2024-12-16 19:21:32.144403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
[2024-12-16 19:21:32.144412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
[2024-12-16 19:21:32.144419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms
[2024-12-16 19:21:32.144428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
[2024-12-16 19:21:32.144460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
[2024-12-16 19:21:32.144468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device
[2024-12-16 19:21:32.144474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms
[2024-12-16 19:21:32.144483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
[2024-12-16 19:21:32.144507] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread
[2024-12-16 19:21:32.147413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
[2024-12-16 19:21:32.147516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
[2024-12-16 19:21:32.147533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.908 ms
[2024-12-16 19:21:32.147539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
[2024-12-16 19:21:32.147595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
[2024-12-16 19:21:32.147614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands
[2024-12-16 19:21:32.147621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms
[2024-12-16 19:21:32.147627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
[2024-12-16 19:21:32.147656] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1
[2024-12-16 19:21:32.147765] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes
[2024-12-16 19:21:32.147777] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
[2024-12-16 19:21:32.147786] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes
[2024-12-16 19:21:32.147795] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB
[2024-12-16 19:21:32.147801] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB
[2024-12-16 19:21:32.147808] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960
[2024-12-16 19:21:32.147814] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4
[2024-12-16 19:21:32.147822] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048
[2024-12-16 19:21:32.147829] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5
[2024-12-16 19:21:32.147836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
[2024-12-16 19:21:32.147841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout
[2024-12-16 19:21:32.147848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.181 ms
[2024-12-16 19:21:32.147854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
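The layout numbers above are internally consistent: 23592960 L2P entries x 4 bytes per entry = 94371840 bytes, which is exactly the 90.00 MiB l2p region in the dump that follows, and 23592960 entries x 4 KiB per logical block = 90 GiB of user-addressable space carved from the 103424 MiB base device once overprovisioning and metadata are set aside. A quick cross-check (a hedged sketch in plain shell arithmetic):

    # Hedged sketch: cross-check the L2P numbers printed above.
    entries=23592960; entry_sz=4; blk_sz=4096
    echo "l2p region:  $(( entries * entry_sz / 1024 / 1024 )) MiB"          # 90 MiB
    echo "addressable: $(( entries * blk_sz / 1024 / 1024 / 1024 )) GiB"     # 90 GiB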
[2024-12-16 19:21:32.147935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
[2024-12-16 19:21:32.147941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout
[2024-12-16 19:21:32.147948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms
[2024-12-16 19:21:32.147953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
[2024-12-16 19:21:32.148061] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
[2024-12-16 19:21:32.148069 .. 32.148332] ftl_layout.c: 130-133:dump_region: *NOTICE*: [FTL][ftl0]
  Region sb               offset:   0.00 MiB   blocks:   0.12 MiB
  Region l2p              offset:   0.12 MiB   blocks:  90.00 MiB
  Region band_md          offset:  90.12 MiB   blocks:   0.50 MiB
  Region band_md_mirror   offset:  90.62 MiB   blocks:   0.50 MiB
  Region nvc_md           offset: 123.88 MiB   blocks:   0.12 MiB
  Region nvc_md_mirror    offset: 124.00 MiB   blocks:   0.12 MiB
  Region p2l0             offset:  91.12 MiB   blocks:   8.00 MiB
  Region p2l1             offset:  99.12 MiB   blocks:   8.00 MiB
  Region p2l2             offset: 107.12 MiB   blocks:   8.00 MiB
  Region p2l3             offset: 115.12 MiB   blocks:   8.00 MiB
  Region trim_md          offset: 123.12 MiB   blocks:   0.25 MiB
  Region trim_md_mirror   offset: 123.38 MiB   blocks:   0.25 MiB
  Region trim_log         offset: 123.62 MiB   blocks:   0.12 MiB
  Region trim_log_mirror  offset: 123.75 MiB   blocks:   0.12 MiB
[2024-12-16 19:21:32.148337] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
  Region sb_mirror        offset:      0.00 MiB   blocks:      0.12 MiB
  Region vmap             offset: 102400.25 MiB   blocks:      3.38 MiB
  Region data_btm         offset:      0.25 MiB   blocks: 102400.00 MiB
[2024-12-16 19:21:32.148415] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
  Region type:0x0        ver:5 blk_offs:0x0       blk_sz:0x20
  Region type:0x2        ver:0 blk_offs:0x20      blk_sz:0x5a00
  Region type:0x3        ver:2 blk_offs:0x5a20    blk_sz:0x80
  Region type:0x4        ver:2 blk_offs:0x5aa0    blk_sz:0x80
  Region type:0xa        ver:2 blk_offs:0x5b20    blk_sz:0x800
  Region type:0xb        ver:2 blk_offs:0x6320    blk_sz:0x800
  Region type:0xc        ver:2 blk_offs:0x6b20    blk_sz:0x800
  Region type:0xd        ver:2 blk_offs:0x7320    blk_sz:0x800
  Region type:0xe        ver:0 blk_offs:0x7b20    blk_sz:0x40
  Region type:0xf        ver:0 blk_offs:0x7b60    blk_sz:0x40
  Region type:0x10       ver:1 blk_offs:0x7ba0    blk_sz:0x20
  Region type:0x11       ver:1 blk_offs:0x7bc0    blk_sz:0x20
  Region type:0x6        ver:2 blk_offs:0x7be0    blk_sz:0x20
  Region type:0x7        ver:2 blk_offs:0x7c00    blk_sz:0x20
  Region type:0xfffffffe ver:0 blk_offs:0x7c20    blk_sz:0x13b6e0
[2024-12-16 19:21:32.148521] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
  Region type:0x1        ver:5 blk_offs:0x0       blk_sz:0x20
  Region type:0xfffffffe ver:0 blk_offs:0x20      blk_sz:0x20
  Region type:0x9        ver:0 blk_offs:0x40      blk_sz:0x1900000
  Region type:0x5        ver:0 blk_offs:0x1900040 blk_sz:0x360
  Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
[2024-12-16 19:21:32.148561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
[2024-12-16 19:21:32.148568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade
[2024-12-16 19:21:32.148574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.553 ms
[2024-12-16 19:21:32.148580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
[2024-12-16 19:21:32.148661] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region
needs scrubbing, this may take a while. 00:19:48.002 [2024-12-16 19:21:32.148671] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:19:51.302 [2024-12-16 19:21:34.993000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.302 [2024-12-16 19:21:34.993058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:19:51.302 [2024-12-16 19:21:34.993073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2844.328 ms 00:19:51.302 [2024-12-16 19:21:34.993083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.302 [2024-12-16 19:21:35.018978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.302 [2024-12-16 19:21:35.019026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:51.302 [2024-12-16 19:21:35.019038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.646 ms 00:19:51.302 [2024-12-16 19:21:35.019049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.302 [2024-12-16 19:21:35.019196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.302 [2024-12-16 19:21:35.019219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:51.302 [2024-12-16 19:21:35.019243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:19:51.302 [2024-12-16 19:21:35.019254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.302 [2024-12-16 19:21:35.063823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.302 [2024-12-16 19:21:35.063870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:51.302 [2024-12-16 19:21:35.063883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.534 ms 00:19:51.302 [2024-12-16 19:21:35.063894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.302 [2024-12-16 19:21:35.063997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.302 [2024-12-16 19:21:35.064011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:51.302 [2024-12-16 19:21:35.064020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:51.302 [2024-12-16 19:21:35.064029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.302 [2024-12-16 19:21:35.064386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.302 [2024-12-16 19:21:35.064406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:51.302 [2024-12-16 19:21:35.064416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.328 ms 00:19:51.302 [2024-12-16 19:21:35.064424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.302 [2024-12-16 19:21:35.064541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.302 [2024-12-16 19:21:35.064552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:51.302 [2024-12-16 19:21:35.064575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:19:51.302 [2024-12-16 19:21:35.064586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.302 [2024-12-16 19:21:35.079133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.302 [2024-12-16 19:21:35.079167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
reloc 00:19:51.302 [2024-12-16 19:21:35.079194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.517 ms 00:19:51.302 [2024-12-16 19:21:35.079204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.302 [2024-12-16 19:21:35.090531] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:51.302 [2024-12-16 19:21:35.105033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.302 [2024-12-16 19:21:35.105068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:51.302 [2024-12-16 19:21:35.105079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.727 ms 00:19:51.302 [2024-12-16 19:21:35.105087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.302 [2024-12-16 19:21:35.188445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.302 [2024-12-16 19:21:35.188492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:19:51.302 [2024-12-16 19:21:35.188507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 83.287 ms 00:19:51.302 [2024-12-16 19:21:35.188515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.302 [2024-12-16 19:21:35.188742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.302 [2024-12-16 19:21:35.188754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:51.302 [2024-12-16 19:21:35.188766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.150 ms 00:19:51.302 [2024-12-16 19:21:35.188774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.302 [2024-12-16 19:21:35.211877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.302 [2024-12-16 19:21:35.211909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:19:51.302 [2024-12-16 19:21:35.211922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.073 ms 00:19:51.302 [2024-12-16 19:21:35.211930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.302 [2024-12-16 19:21:35.234584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.303 [2024-12-16 19:21:35.234613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:19:51.303 [2024-12-16 19:21:35.234626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.586 ms 00:19:51.303 [2024-12-16 19:21:35.234633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.303 [2024-12-16 19:21:35.235231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.303 [2024-12-16 19:21:35.235248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:51.303 [2024-12-16 19:21:35.235258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.539 ms 00:19:51.303 [2024-12-16 19:21:35.235265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.303 [2024-12-16 19:21:35.305805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.303 [2024-12-16 19:21:35.305848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:19:51.303 [2024-12-16 19:21:35.305865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 70.505 ms 00:19:51.303 [2024-12-16 19:21:35.305873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
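(Editor's note on reading the trace above: SPDK's FTL management layer logs each startup step as an Action block — a name line, a duration line, and a status line. A minimal sketch for tabulating step names against their durations from a saved copy of this output; 'build.log' is a hypothetical capture of this console text, not a file the harness creates:)

paste <(sed -n 's/.*trace_step: \*NOTICE\*: \[FTL\]\[ftl0\] name: //p' build.log) \
      <(sed -n 's/.*trace_step: \*NOTICE\*: \[FTL\]\[ftl0\] duration: //p' build.log)
# pairs each step with its timing, e.g.:  Scrub NV cache    2844.328 ms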
00:19:51.303 [2024-12-16 19:21:35.329988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.303 [2024-12-16 19:21:35.330022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:19:51.303 [2024-12-16 19:21:35.330035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.016 ms 00:19:51.303 [2024-12-16 19:21:35.330043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.303 [2024-12-16 19:21:35.353285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.303 [2024-12-16 19:21:35.353316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:19:51.303 [2024-12-16 19:21:35.353329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.178 ms 00:19:51.303 [2024-12-16 19:21:35.353336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.303 [2024-12-16 19:21:35.376790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.303 [2024-12-16 19:21:35.376834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:51.303 [2024-12-16 19:21:35.376846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.384 ms 00:19:51.303 [2024-12-16 19:21:35.376853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.303 [2024-12-16 19:21:35.376930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.303 [2024-12-16 19:21:35.376943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:51.303 [2024-12-16 19:21:35.376955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:51.303 [2024-12-16 19:21:35.376962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.303 [2024-12-16 19:21:35.377036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.303 [2024-12-16 19:21:35.377045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:51.303 [2024-12-16 19:21:35.377055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:19:51.303 [2024-12-16 19:21:35.377062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.303 [2024-12-16 19:21:35.377896] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:51.303 [2024-12-16 19:21:35.381029] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3242.911 ms, result 0 00:19:51.303 [2024-12-16 19:21:35.381927] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:51.303 { 00:19:51.303 "name": "ftl0", 00:19:51.303 "uuid": "36f840fd-a489-4625-9f63-997690818b8f" 00:19:51.303 } 00:19:51.303 19:21:35 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:19:51.303 19:21:35 ftl.ftl_trim -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:19:51.303 19:21:35 ftl.ftl_trim -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:51.303 19:21:35 ftl.ftl_trim -- common/autotest_common.sh@905 -- # local i 00:19:51.303 19:21:35 ftl.ftl_trim -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:51.303 19:21:35 ftl.ftl_trim -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:51.303 19:21:35 ftl.ftl_trim -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:19:51.303 19:21:35 ftl.ftl_trim -- 
common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:19:51.563 [ 00:19:51.563 { 00:19:51.563 "name": "ftl0", 00:19:51.563 "aliases": [ 00:19:51.563 "36f840fd-a489-4625-9f63-997690818b8f" 00:19:51.563 ], 00:19:51.563 "product_name": "FTL disk", 00:19:51.563 "block_size": 4096, 00:19:51.563 "num_blocks": 23592960, 00:19:51.563 "uuid": "36f840fd-a489-4625-9f63-997690818b8f", 00:19:51.563 "assigned_rate_limits": { 00:19:51.563 "rw_ios_per_sec": 0, 00:19:51.563 "rw_mbytes_per_sec": 0, 00:19:51.563 "r_mbytes_per_sec": 0, 00:19:51.563 "w_mbytes_per_sec": 0 00:19:51.563 }, 00:19:51.563 "claimed": false, 00:19:51.563 "zoned": false, 00:19:51.563 "supported_io_types": { 00:19:51.563 "read": true, 00:19:51.563 "write": true, 00:19:51.563 "unmap": true, 00:19:51.563 "flush": true, 00:19:51.563 "reset": false, 00:19:51.563 "nvme_admin": false, 00:19:51.563 "nvme_io": false, 00:19:51.563 "nvme_io_md": false, 00:19:51.563 "write_zeroes": true, 00:19:51.563 "zcopy": false, 00:19:51.563 "get_zone_info": false, 00:19:51.563 "zone_management": false, 00:19:51.563 "zone_append": false, 00:19:51.563 "compare": false, 00:19:51.563 "compare_and_write": false, 00:19:51.563 "abort": false, 00:19:51.563 "seek_hole": false, 00:19:51.563 "seek_data": false, 00:19:51.563 "copy": false, 00:19:51.563 "nvme_iov_md": false 00:19:51.563 }, 00:19:51.563 "driver_specific": { 00:19:51.563 "ftl": { 00:19:51.563 "base_bdev": "adadfbb0-c75a-4443-83c5-a65b5eef6612", 00:19:51.563 "cache": "nvc0n1p0" 00:19:51.563 } 00:19:51.563 } 00:19:51.563 } 00:19:51.563 ] 00:19:51.563 19:21:35 ftl.ftl_trim -- common/autotest_common.sh@911 -- # return 0 00:19:51.563 19:21:35 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:19:51.563 19:21:35 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:19:51.822 19:21:36 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:19:51.822 19:21:36 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:19:52.084 19:21:36 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:19:52.084 { 00:19:52.084 "name": "ftl0", 00:19:52.084 "aliases": [ 00:19:52.084 "36f840fd-a489-4625-9f63-997690818b8f" 00:19:52.084 ], 00:19:52.084 "product_name": "FTL disk", 00:19:52.084 "block_size": 4096, 00:19:52.084 "num_blocks": 23592960, 00:19:52.084 "uuid": "36f840fd-a489-4625-9f63-997690818b8f", 00:19:52.084 "assigned_rate_limits": { 00:19:52.084 "rw_ios_per_sec": 0, 00:19:52.084 "rw_mbytes_per_sec": 0, 00:19:52.084 "r_mbytes_per_sec": 0, 00:19:52.084 "w_mbytes_per_sec": 0 00:19:52.084 }, 00:19:52.084 "claimed": false, 00:19:52.084 "zoned": false, 00:19:52.084 "supported_io_types": { 00:19:52.084 "read": true, 00:19:52.084 "write": true, 00:19:52.084 "unmap": true, 00:19:52.084 "flush": true, 00:19:52.084 "reset": false, 00:19:52.084 "nvme_admin": false, 00:19:52.084 "nvme_io": false, 00:19:52.084 "nvme_io_md": false, 00:19:52.084 "write_zeroes": true, 00:19:52.084 "zcopy": false, 00:19:52.084 "get_zone_info": false, 00:19:52.084 "zone_management": false, 00:19:52.084 "zone_append": false, 00:19:52.084 "compare": false, 00:19:52.084 "compare_and_write": false, 00:19:52.084 "abort": false, 00:19:52.084 "seek_hole": false, 00:19:52.084 "seek_data": false, 00:19:52.084 "copy": false, 00:19:52.084 "nvme_iov_md": false 00:19:52.084 }, 00:19:52.084 "driver_specific": { 00:19:52.084 "ftl": { 00:19:52.084 "base_bdev": "adadfbb0-c75a-4443-83c5-a65b5eef6612", 
00:19:52.084 "cache": "nvc0n1p0" 00:19:52.084 } 00:19:52.084 } 00:19:52.084 } 00:19:52.084 ]' 00:19:52.084 19:21:36 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:19:52.084 19:21:36 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:19:52.084 19:21:36 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:19:52.084 [2024-12-16 19:21:36.417542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.084 [2024-12-16 19:21:36.417587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:52.084 [2024-12-16 19:21:36.417602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:52.084 [2024-12-16 19:21:36.417614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.084 [2024-12-16 19:21:36.417645] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:52.084 [2024-12-16 19:21:36.420241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.084 [2024-12-16 19:21:36.420269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:52.084 [2024-12-16 19:21:36.420283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.578 ms 00:19:52.084 [2024-12-16 19:21:36.420292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.084 [2024-12-16 19:21:36.420884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.084 [2024-12-16 19:21:36.420904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:52.084 [2024-12-16 19:21:36.420914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.547 ms 00:19:52.084 [2024-12-16 19:21:36.420922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.084 [2024-12-16 19:21:36.424574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.084 [2024-12-16 19:21:36.424597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:52.084 [2024-12-16 19:21:36.424608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.624 ms 00:19:52.084 [2024-12-16 19:21:36.424617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.084 [2024-12-16 19:21:36.431654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.084 [2024-12-16 19:21:36.431682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:52.084 [2024-12-16 19:21:36.431694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.977 ms 00:19:52.084 [2024-12-16 19:21:36.431701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.346 [2024-12-16 19:21:36.455479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.346 [2024-12-16 19:21:36.455511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:52.346 [2024-12-16 19:21:36.455526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.704 ms 00:19:52.346 [2024-12-16 19:21:36.455533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.346 [2024-12-16 19:21:36.470220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.346 [2024-12-16 19:21:36.470348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:52.346 [2024-12-16 19:21:36.470368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 14.626 ms 00:19:52.346 [2024-12-16 19:21:36.470394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.346 [2024-12-16 19:21:36.470602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.346 [2024-12-16 19:21:36.470613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:52.346 [2024-12-16 19:21:36.470624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.140 ms 00:19:52.346 [2024-12-16 19:21:36.470631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.346 [2024-12-16 19:21:36.493518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.346 [2024-12-16 19:21:36.493634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:52.346 [2024-12-16 19:21:36.493652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.851 ms 00:19:52.346 [2024-12-16 19:21:36.493659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.346 [2024-12-16 19:21:36.516238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.346 [2024-12-16 19:21:36.516344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:52.346 [2024-12-16 19:21:36.516363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.523 ms 00:19:52.346 [2024-12-16 19:21:36.516370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.346 [2024-12-16 19:21:36.538505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.346 [2024-12-16 19:21:36.538535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:52.346 [2024-12-16 19:21:36.538547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.076 ms 00:19:52.346 [2024-12-16 19:21:36.538553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.346 [2024-12-16 19:21:36.560847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.346 [2024-12-16 19:21:36.560877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:52.346 [2024-12-16 19:21:36.560889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.187 ms 00:19:52.346 [2024-12-16 19:21:36.560896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.346 [2024-12-16 19:21:36.560952] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:52.346 [2024-12-16 19:21:36.560967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:52.347 [2024-12-16 19:21:36.560978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:52.347 [2024-12-16 19:21:36.560985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:52.347 [2024-12-16 19:21:36.560995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:52.347 [2024-12-16 19:21:36.561002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:52.347 [2024-12-16 19:21:36.561012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:52.347 [2024-12-16 19:21:36.561020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:52.347 [2024-12-16 19:21:36.561029] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:52.347 [2024-12-16 19:21:36.561037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:52.347 [2024-12-16 19:21:36.561046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:52.347 [2024-12-16 19:21:36.561053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:52.347 [2024-12-16 19:21:36.561063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:52.347 [2024-12-16 19:21:36.561070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:52.347 [2024-12-16 19:21:36.561079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:52.347 [2024-12-16 19:21:36.561086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:52.347 [2024-12-16 19:21:36.561095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:52.347 [2024-12-16 19:21:36.561102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:52.347 [2024-12-16 19:21:36.561111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:52.347 [2024-12-16 19:21:36.561118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:52.347 [2024-12-16 19:21:36.561142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:52.347 [2024-12-16 19:21:36.561149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:52.347 [2024-12-16 19:21:36.561159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:52.347 [2024-12-16 19:21:36.561167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:52.347 [2024-12-16 19:21:36.561193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:52.347 [2024-12-16 19:21:36.561201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:52.347 [2024-12-16 19:21:36.561210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:52.347 [2024-12-16 19:21:36.561218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:52.347 [2024-12-16 19:21:36.561227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:52.347 [2024-12-16 19:21:36.561235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:52.347 [2024-12-16 19:21:36.561244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:52.347 [2024-12-16 19:21:36.561252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:52.347 [2024-12-16 19:21:36.561261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:52.347 
[2024-12-16 19:21:36.561268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:52.347 [2024-12-16 19:21:36.561277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:52.347 [2024-12-16 19:21:36.561284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:52.347 [2024-12-16 19:21:36.561293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:52.347 [2024-12-16 19:21:36.561300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:52.347 [2024-12-16 19:21:36.561311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:52.347 [2024-12-16 19:21:36.561318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:52.347 [2024-12-16 19:21:36.561328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:52.347 [2024-12-16 19:21:36.561335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:52.347 [2024-12-16 19:21:36.561344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:52.347 [2024-12-16 19:21:36.561351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:52.347 [2024-12-16 19:21:36.561360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:52.347 [2024-12-16 19:21:36.561367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:52.347 [2024-12-16 19:21:36.561377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:52.347 [2024-12-16 19:21:36.561384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:52.347 [2024-12-16 19:21:36.561393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:52.347 [2024-12-16 19:21:36.561400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:52.347 [2024-12-16 19:21:36.561409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:52.347 [2024-12-16 19:21:36.561416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:52.347 [2024-12-16 19:21:36.561425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:52.347 [2024-12-16 19:21:36.561432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:52.347 [2024-12-16 19:21:36.561442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:52.347 [2024-12-16 19:21:36.561450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:52.347 [2024-12-16 19:21:36.561459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:52.347 [2024-12-16 19:21:36.561468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 
state: free 00:19:52.347 [2024-12-16 19:21:36.561477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:52.347 [2024-12-16 19:21:36.561484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:52.347 [2024-12-16 19:21:36.561494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:52.347 [2024-12-16 19:21:36.561501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:52.347 [2024-12-16 19:21:36.561510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:52.347 [2024-12-16 19:21:36.561517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:52.347 [2024-12-16 19:21:36.561526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:52.347 [2024-12-16 19:21:36.561534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:52.347 [2024-12-16 19:21:36.561543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:52.347 [2024-12-16 19:21:36.561550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:52.347 [2024-12-16 19:21:36.561559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:52.347 [2024-12-16 19:21:36.561566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:52.347 [2024-12-16 19:21:36.561576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:52.347 [2024-12-16 19:21:36.561583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:52.347 [2024-12-16 19:21:36.561593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:52.347 [2024-12-16 19:21:36.561600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:52.347 [2024-12-16 19:21:36.561609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:52.347 [2024-12-16 19:21:36.561616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:52.347 [2024-12-16 19:21:36.561626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:52.347 [2024-12-16 19:21:36.561634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:52.347 [2024-12-16 19:21:36.561643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:52.347 [2024-12-16 19:21:36.561649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:52.347 [2024-12-16 19:21:36.561658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:52.347 [2024-12-16 19:21:36.561665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:52.347 [2024-12-16 19:21:36.561674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 
0 / 261120 wr_cnt: 0 state: free 00:19:52.347 [2024-12-16 19:21:36.561681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:52.347 [2024-12-16 19:21:36.561690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:52.347 [2024-12-16 19:21:36.561697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:52.347 [2024-12-16 19:21:36.561707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:52.347 [2024-12-16 19:21:36.561714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:52.347 [2024-12-16 19:21:36.561723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:52.347 [2024-12-16 19:21:36.561729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:52.347 [2024-12-16 19:21:36.561738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:52.348 [2024-12-16 19:21:36.561745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:52.348 [2024-12-16 19:21:36.561754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:52.348 [2024-12-16 19:21:36.561761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:52.348 [2024-12-16 19:21:36.561769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:52.348 [2024-12-16 19:21:36.561778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:52.348 [2024-12-16 19:21:36.561788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:52.348 [2024-12-16 19:21:36.561794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:52.348 [2024-12-16 19:21:36.561803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:52.348 [2024-12-16 19:21:36.561810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:52.348 [2024-12-16 19:21:36.561820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:52.348 [2024-12-16 19:21:36.561835] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:52.348 [2024-12-16 19:21:36.561845] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 36f840fd-a489-4625-9f63-997690818b8f 00:19:52.348 [2024-12-16 19:21:36.561853] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:52.348 [2024-12-16 19:21:36.561861] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:52.348 [2024-12-16 19:21:36.561868] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:52.348 [2024-12-16 19:21:36.561879] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:52.348 [2024-12-16 19:21:36.561885] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:52.348 [2024-12-16 19:21:36.561894] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 
00:19:52.348 [2024-12-16 19:21:36.561901] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:52.348 [2024-12-16 19:21:36.561909] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:52.348 [2024-12-16 19:21:36.561915] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:52.348 [2024-12-16 19:21:36.561924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.348 [2024-12-16 19:21:36.561931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:52.348 [2024-12-16 19:21:36.561940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.974 ms 00:19:52.348 [2024-12-16 19:21:36.561947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.348 [2024-12-16 19:21:36.574390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.348 [2024-12-16 19:21:36.574420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:52.348 [2024-12-16 19:21:36.574433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.406 ms 00:19:52.348 [2024-12-16 19:21:36.574441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.348 [2024-12-16 19:21:36.574808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.348 [2024-12-16 19:21:36.574818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:52.348 [2024-12-16 19:21:36.574828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.316 ms 00:19:52.348 [2024-12-16 19:21:36.574835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.348 [2024-12-16 19:21:36.618820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:52.348 [2024-12-16 19:21:36.618853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:52.348 [2024-12-16 19:21:36.618864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:52.348 [2024-12-16 19:21:36.618872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.348 [2024-12-16 19:21:36.618970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:52.348 [2024-12-16 19:21:36.618980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:52.348 [2024-12-16 19:21:36.618989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:52.348 [2024-12-16 19:21:36.618996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.348 [2024-12-16 19:21:36.619058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:52.348 [2024-12-16 19:21:36.619067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:52.348 [2024-12-16 19:21:36.619080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:52.348 [2024-12-16 19:21:36.619087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.348 [2024-12-16 19:21:36.619124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:52.348 [2024-12-16 19:21:36.619132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:52.348 [2024-12-16 19:21:36.619140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:52.348 [2024-12-16 19:21:36.619148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.609 [2024-12-16 19:21:36.701269] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:52.609 [2024-12-16 19:21:36.701313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:52.609 [2024-12-16 19:21:36.701325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:52.609 [2024-12-16 19:21:36.701333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.609 [2024-12-16 19:21:36.764259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:52.609 [2024-12-16 19:21:36.764298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:52.609 [2024-12-16 19:21:36.764310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:52.609 [2024-12-16 19:21:36.764317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.609 [2024-12-16 19:21:36.764417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:52.609 [2024-12-16 19:21:36.764426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:52.609 [2024-12-16 19:21:36.764438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:52.609 [2024-12-16 19:21:36.764448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.609 [2024-12-16 19:21:36.764506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:52.609 [2024-12-16 19:21:36.764513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:52.609 [2024-12-16 19:21:36.764523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:52.609 [2024-12-16 19:21:36.764530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.609 [2024-12-16 19:21:36.764631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:52.609 [2024-12-16 19:21:36.764640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:52.609 [2024-12-16 19:21:36.764650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:52.609 [2024-12-16 19:21:36.764659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.609 [2024-12-16 19:21:36.764715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:52.609 [2024-12-16 19:21:36.764724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:52.609 [2024-12-16 19:21:36.764733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:52.609 [2024-12-16 19:21:36.764740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.609 [2024-12-16 19:21:36.764788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:52.609 [2024-12-16 19:21:36.764796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:52.609 [2024-12-16 19:21:36.764808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:52.609 [2024-12-16 19:21:36.764815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.609 [2024-12-16 19:21:36.764871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:52.609 [2024-12-16 19:21:36.764880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:52.609 [2024-12-16 19:21:36.764889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:52.609 [2024-12-16 19:21:36.764896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:19:52.609 [2024-12-16 19:21:36.765077] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 347.505 ms, result 0 00:19:52.609 true 00:19:52.609 19:21:36 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 78113 00:19:52.609 19:21:36 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 78113 ']' 00:19:52.609 19:21:36 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 78113 00:19:52.609 19:21:36 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:19:52.609 19:21:36 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:52.609 19:21:36 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78113 00:19:52.609 killing process with pid 78113 00:19:52.609 19:21:36 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:52.609 19:21:36 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:52.609 19:21:36 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78113' 00:19:52.609 19:21:36 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 78113 00:19:52.609 19:21:36 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 78113 00:19:59.169 19:21:42 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:19:59.430 65536+0 records in 00:19:59.430 65536+0 records out 00:19:59.430 268435456 bytes (268 MB, 256 MiB) copied, 1.09567 s, 245 MB/s 00:19:59.430 19:21:43 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:59.691 [2024-12-16 19:21:43.830334] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
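(Editor's note on the shell trace interleaved above: the harness waits for the FTL bdev to appear, snapshots the bdev subsystem config, reads the device size, unloads the device, then replays a random pattern through spdk_dd. A condensed sketch of those commands as they appear in this log — the dd output file and the JSON redirection target are inferred from the spdk_dd arguments rather than shown verbatim in the trace:)

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
FTL=/home/vagrant/spdk_repo/spdk/test/ftl

$RPC bdev_wait_for_examine                # let bdev examine callbacks drain
$RPC bdev_get_bdevs -b ftl0 -t 2000       # waitforbdev: poll until ftl0 exists

{ echo '{"subsystems": ['; $RPC save_subsystem_config -n bdev; echo ']}'; } \
    > $FTL/config/ftl.json                # config spdk_dd uses to re-create ftl0

$RPC bdev_get_bdevs -b ftl0 | jq '.[] .num_blocks'   # 23592960 blocks of 4096 B

$RPC bdev_ftl_unload -b ftl0              # clean shutdown; persists FTL metadata

dd if=/dev/urandom of=$FTL/random_pattern bs=4K count=65536   # 256 MiB pattern
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
    --if=$FTL/random_pattern --ob=ftl0 --json=$FTL/config/ftl.json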
00:19:59.691 [2024-12-16 19:21:43.830463] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78306 ] 00:19:59.691 [2024-12-16 19:21:43.992228] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:59.951 [2024-12-16 19:21:44.091371] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:20:00.214 [2024-12-16 19:21:44.352759] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:00.214 [2024-12-16 19:21:44.352827] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:00.214 [2024-12-16 19:21:44.508999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.214 [2024-12-16 19:21:44.509227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:00.214 [2024-12-16 19:21:44.509248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:00.214 [2024-12-16 19:21:44.509256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.214 [2024-12-16 19:21:44.511934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.214 [2024-12-16 19:21:44.511973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:00.214 [2024-12-16 19:21:44.511983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.655 ms 00:20:00.214 [2024-12-16 19:21:44.511990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.214 [2024-12-16 19:21:44.512077] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:00.214 [2024-12-16 19:21:44.512880] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:00.214 [2024-12-16 19:21:44.512915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.214 [2024-12-16 19:21:44.512924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:00.214 [2024-12-16 19:21:44.512933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.845 ms 00:20:00.214 [2024-12-16 19:21:44.512940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.214 [2024-12-16 19:21:44.514079] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:00.214 [2024-12-16 19:21:44.526862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.214 [2024-12-16 19:21:44.526896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:00.214 [2024-12-16 19:21:44.526907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.785 ms 00:20:00.214 [2024-12-16 19:21:44.526914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.214 [2024-12-16 19:21:44.527000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.214 [2024-12-16 19:21:44.527011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:00.214 [2024-12-16 19:21:44.527020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:20:00.214 [2024-12-16 19:21:44.527027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.214 [2024-12-16 19:21:44.532068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:20:00.214 [2024-12-16 19:21:44.532099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:00.214 [2024-12-16 19:21:44.532109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.001 ms 00:20:00.214 [2024-12-16 19:21:44.532116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.214 [2024-12-16 19:21:44.532219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.214 [2024-12-16 19:21:44.532229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:00.214 [2024-12-16 19:21:44.532238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:20:00.214 [2024-12-16 19:21:44.532245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.214 [2024-12-16 19:21:44.532271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.214 [2024-12-16 19:21:44.532279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:00.214 [2024-12-16 19:21:44.532287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:00.214 [2024-12-16 19:21:44.532295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.214 [2024-12-16 19:21:44.532315] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:00.214 [2024-12-16 19:21:44.535631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.214 [2024-12-16 19:21:44.535660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:00.214 [2024-12-16 19:21:44.535669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.321 ms 00:20:00.214 [2024-12-16 19:21:44.535677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.214 [2024-12-16 19:21:44.535713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.214 [2024-12-16 19:21:44.535722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:00.214 [2024-12-16 19:21:44.535730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:00.214 [2024-12-16 19:21:44.535737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.214 [2024-12-16 19:21:44.535757] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:00.214 [2024-12-16 19:21:44.535789] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:00.214 [2024-12-16 19:21:44.535822] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:00.214 [2024-12-16 19:21:44.535836] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:00.214 [2024-12-16 19:21:44.535938] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:00.214 [2024-12-16 19:21:44.535948] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:00.214 [2024-12-16 19:21:44.535957] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:00.214 [2024-12-16 19:21:44.535969] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:00.214 [2024-12-16 19:21:44.535978] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:00.214 [2024-12-16 19:21:44.535986] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:00.214 [2024-12-16 19:21:44.535992] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:00.214 [2024-12-16 19:21:44.536000] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:00.214 [2024-12-16 19:21:44.536007] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:00.214 [2024-12-16 19:21:44.536015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.214 [2024-12-16 19:21:44.536021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:00.214 [2024-12-16 19:21:44.536029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.260 ms 00:20:00.214 [2024-12-16 19:21:44.536036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.214 [2024-12-16 19:21:44.536123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.214 [2024-12-16 19:21:44.536134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:00.214 [2024-12-16 19:21:44.536141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:20:00.214 [2024-12-16 19:21:44.536148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.214 [2024-12-16 19:21:44.536271] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:00.214 [2024-12-16 19:21:44.536283] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:00.214 [2024-12-16 19:21:44.536290] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:00.214 [2024-12-16 19:21:44.536298] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:00.214 [2024-12-16 19:21:44.536305] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:00.214 [2024-12-16 19:21:44.536312] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:00.214 [2024-12-16 19:21:44.536318] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:00.214 [2024-12-16 19:21:44.536325] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:00.214 [2024-12-16 19:21:44.536331] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:00.214 [2024-12-16 19:21:44.536337] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:00.214 [2024-12-16 19:21:44.536344] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:00.214 [2024-12-16 19:21:44.536356] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:00.214 [2024-12-16 19:21:44.536363] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:00.214 [2024-12-16 19:21:44.536370] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:00.214 [2024-12-16 19:21:44.536377] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:00.214 [2024-12-16 19:21:44.536384] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:00.214 [2024-12-16 19:21:44.536390] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:00.214 [2024-12-16 19:21:44.536397] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:00.214 [2024-12-16 19:21:44.536403] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:00.214 [2024-12-16 19:21:44.536410] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:00.214 [2024-12-16 19:21:44.536416] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:00.215 [2024-12-16 19:21:44.536423] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:00.215 [2024-12-16 19:21:44.536429] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:00.215 [2024-12-16 19:21:44.536436] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:00.215 [2024-12-16 19:21:44.536442] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:00.215 [2024-12-16 19:21:44.536448] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:00.215 [2024-12-16 19:21:44.536454] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:00.215 [2024-12-16 19:21:44.536461] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:00.215 [2024-12-16 19:21:44.536467] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:00.215 [2024-12-16 19:21:44.536474] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:00.215 [2024-12-16 19:21:44.536480] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:00.215 [2024-12-16 19:21:44.536486] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:00.215 [2024-12-16 19:21:44.536492] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:00.215 [2024-12-16 19:21:44.536498] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:00.215 [2024-12-16 19:21:44.536504] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:00.215 [2024-12-16 19:21:44.536510] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:00.215 [2024-12-16 19:21:44.536517] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:00.215 [2024-12-16 19:21:44.536523] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:00.215 [2024-12-16 19:21:44.536530] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:00.215 [2024-12-16 19:21:44.536536] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:00.215 [2024-12-16 19:21:44.536542] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:00.215 [2024-12-16 19:21:44.536549] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:00.215 [2024-12-16 19:21:44.536555] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:00.215 [2024-12-16 19:21:44.536561] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:00.215 [2024-12-16 19:21:44.536569] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:00.215 [2024-12-16 19:21:44.536578] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:00.215 [2024-12-16 19:21:44.536585] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:00.215 [2024-12-16 19:21:44.536593] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:00.215 [2024-12-16 19:21:44.536599] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:00.215 [2024-12-16 19:21:44.536606] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:00.215 
[2024-12-16 19:21:44.536612] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:00.215 [2024-12-16 19:21:44.536619] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:00.215 [2024-12-16 19:21:44.536625] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:00.215 [2024-12-16 19:21:44.536633] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:00.215 [2024-12-16 19:21:44.536642] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:00.215 [2024-12-16 19:21:44.536651] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:00.215 [2024-12-16 19:21:44.536658] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:00.215 [2024-12-16 19:21:44.536665] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:00.215 [2024-12-16 19:21:44.536672] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:00.215 [2024-12-16 19:21:44.536680] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:00.215 [2024-12-16 19:21:44.536687] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:00.215 [2024-12-16 19:21:44.536693] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:00.215 [2024-12-16 19:21:44.536700] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:00.215 [2024-12-16 19:21:44.536707] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:00.215 [2024-12-16 19:21:44.536714] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:00.215 [2024-12-16 19:21:44.536721] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:00.215 [2024-12-16 19:21:44.536728] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:00.215 [2024-12-16 19:21:44.536735] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:00.215 [2024-12-16 19:21:44.536743] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:00.215 [2024-12-16 19:21:44.536749] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:00.215 [2024-12-16 19:21:44.536758] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:00.215 [2024-12-16 19:21:44.536765] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:20:00.215 [2024-12-16 19:21:44.536774] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:00.215 [2024-12-16 19:21:44.536780] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:00.215 [2024-12-16 19:21:44.536787] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:00.215 [2024-12-16 19:21:44.536794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.215 [2024-12-16 19:21:44.536804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:00.215 [2024-12-16 19:21:44.536811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.592 ms 00:20:00.215 [2024-12-16 19:21:44.536819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.215 [2024-12-16 19:21:44.563046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.215 [2024-12-16 19:21:44.563211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:00.215 [2024-12-16 19:21:44.563228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.176 ms 00:20:00.215 [2024-12-16 19:21:44.563236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.215 [2024-12-16 19:21:44.563355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.215 [2024-12-16 19:21:44.563365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:00.215 [2024-12-16 19:21:44.563374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:20:00.215 [2024-12-16 19:21:44.563381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.477 [2024-12-16 19:21:44.605970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.477 [2024-12-16 19:21:44.606127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:00.477 [2024-12-16 19:21:44.606150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.568 ms 00:20:00.477 [2024-12-16 19:21:44.606158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.477 [2024-12-16 19:21:44.606268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.477 [2024-12-16 19:21:44.606281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:00.477 [2024-12-16 19:21:44.606291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:00.477 [2024-12-16 19:21:44.606298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.477 [2024-12-16 19:21:44.606667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.477 [2024-12-16 19:21:44.606689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:00.477 [2024-12-16 19:21:44.606699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.347 ms 00:20:00.477 [2024-12-16 19:21:44.606712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.477 [2024-12-16 19:21:44.606841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.477 [2024-12-16 19:21:44.606856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:00.477 [2024-12-16 19:21:44.606864] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:20:00.477 [2024-12-16 19:21:44.606870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.477 [2024-12-16 19:21:44.620848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.477 [2024-12-16 19:21:44.620980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:00.477 [2024-12-16 19:21:44.620996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.958 ms 00:20:00.477 [2024-12-16 19:21:44.621004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.477 [2024-12-16 19:21:44.633970] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:20:00.477 [2024-12-16 19:21:44.634007] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:00.477 [2024-12-16 19:21:44.634019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.477 [2024-12-16 19:21:44.634027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:00.477 [2024-12-16 19:21:44.634036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.911 ms 00:20:00.477 [2024-12-16 19:21:44.634043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.477 [2024-12-16 19:21:44.658620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.477 [2024-12-16 19:21:44.658659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:00.477 [2024-12-16 19:21:44.658670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.504 ms 00:20:00.477 [2024-12-16 19:21:44.658678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.477 [2024-12-16 19:21:44.670946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.477 [2024-12-16 19:21:44.670980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:00.477 [2024-12-16 19:21:44.670990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.193 ms 00:20:00.477 [2024-12-16 19:21:44.670997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.477 [2024-12-16 19:21:44.682987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.477 [2024-12-16 19:21:44.683126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:00.477 [2024-12-16 19:21:44.683146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.921 ms 00:20:00.477 [2024-12-16 19:21:44.683156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.477 [2024-12-16 19:21:44.683806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.477 [2024-12-16 19:21:44.683830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:00.477 [2024-12-16 19:21:44.683840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.525 ms 00:20:00.477 [2024-12-16 19:21:44.683848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.477 [2024-12-16 19:21:44.742570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.477 [2024-12-16 19:21:44.742625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:00.477 [2024-12-16 19:21:44.742639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 58.696 ms 00:20:00.477 [2024-12-16 19:21:44.742647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.477 [2024-12-16 19:21:44.753405] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:00.477 [2024-12-16 19:21:44.771637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.477 [2024-12-16 19:21:44.771687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:00.477 [2024-12-16 19:21:44.771699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.883 ms 00:20:00.477 [2024-12-16 19:21:44.771708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.477 [2024-12-16 19:21:44.771804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.477 [2024-12-16 19:21:44.771815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:00.477 [2024-12-16 19:21:44.771826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:20:00.477 [2024-12-16 19:21:44.771834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.477 [2024-12-16 19:21:44.771890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.477 [2024-12-16 19:21:44.771899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:00.477 [2024-12-16 19:21:44.771908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:20:00.477 [2024-12-16 19:21:44.771916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.477 [2024-12-16 19:21:44.771948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.477 [2024-12-16 19:21:44.771959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:00.477 [2024-12-16 19:21:44.771967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:00.477 [2024-12-16 19:21:44.771979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.477 [2024-12-16 19:21:44.772017] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:00.477 [2024-12-16 19:21:44.772027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.477 [2024-12-16 19:21:44.772036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:00.477 [2024-12-16 19:21:44.772044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:20:00.477 [2024-12-16 19:21:44.772052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.477 [2024-12-16 19:21:44.798189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.477 [2024-12-16 19:21:44.798243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:00.477 [2024-12-16 19:21:44.798258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.115 ms 00:20:00.478 [2024-12-16 19:21:44.798267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.478 [2024-12-16 19:21:44.798397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.478 [2024-12-16 19:21:44.798409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:00.478 [2024-12-16 19:21:44.798419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:20:00.478 [2024-12-16 19:21:44.798427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
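
The trace_step quadruplets above (Action / name: / duration: / status:) make it easy to pull a per-step timing profile out of a captured console log. A minimal sketch, assuming the log has been saved one-entry-per-line to build.log (a hypothetical file name, not part of the test suite); the awk program is illustrative:

  awk '
    /trace_step/ && /name:/     { sub(/.*name: /, ""); name = $0 }   # remember the step name
    /trace_step/ && /duration:/ { ms = $(NF-1); total += ms          # fields end "duration: <ms> ms"
                                  printf "%10.3f ms  %s\n", ms, name }
    END                         { printf "%10.3f ms  (sum of steps)\n", total }
  ' build.log

On the startup sequence above, the per-step sum should land close to the 290.171 ms that finish_msg reports for the whole 'FTL startup' management process just below.
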
00:20:00.478 [2024-12-16 19:21:44.799528] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:00.478 [2024-12-16 19:21:44.803183] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 290.171 ms, result 0 00:20:00.478 [2024-12-16 19:21:44.804256] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:00.478 [2024-12-16 19:21:44.818100] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:01.861
[2024-12-16T19:21:47.155Z] Copying: 17/256 [MB] (17 MBps)
[2024-12-16T19:21:48.097Z] Copying: 35/256 [MB] (17 MBps)
[2024-12-16T19:21:49.040Z] Copying: 50/256 [MB] (14 MBps)
[2024-12-16T19:21:49.984Z] Copying: 64/256 [MB] (14 MBps)
[2024-12-16T19:21:50.926Z] Copying: 75/256 [MB] (10 MBps)
[2024-12-16T19:21:51.869Z] Copying: 94/256 [MB] (19 MBps)
[2024-12-16T19:21:53.256Z] Copying: 116/256 [MB] (22 MBps)
[2024-12-16T19:21:53.828Z] Copying: 152/256 [MB] (35 MBps)
[2024-12-16T19:21:55.222Z] Copying: 188/256 [MB] (36 MBps)
[2024-12-16T19:21:56.170Z] Copying: 204/256 [MB] (15 MBps)
[2024-12-16T19:21:57.115Z] Copying: 218/256 [MB] (14 MBps)
[2024-12-16T19:21:57.376Z] Copying: 242/256 [MB] (23 MBps)
[2024-12-16T19:21:57.376Z] Copying: 256/256 [MB] (average 20 MBps)
[2024-12-16 19:21:57.337280] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:13.022 [2024-12-16 19:21:57.344651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.022 [2024-12-16 19:21:57.344768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:13.022 [2024-12-16 19:21:57.344782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:13.022 [2024-12-16 19:21:57.344789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.022 [2024-12-16 19:21:57.344813] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:13.022 [2024-12-16 19:21:57.346861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.022 [2024-12-16 19:21:57.346882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:13.022 [2024-12-16 19:21:57.346890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.038 ms 00:20:13.022 [2024-12-16 19:21:57.346897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.022 [2024-12-16 19:21:57.348415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.022 [2024-12-16 19:21:57.348439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:13.022 [2024-12-16 19:21:57.348446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.502 ms 00:20:13.022 [2024-12-16 19:21:57.348452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.022 [2024-12-16 19:21:57.354082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.022 [2024-12-16 19:21:57.354111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:13.022 [2024-12-16 19:21:57.354119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.617 ms 00:20:13.022 [2024-12-16 19:21:57.354125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.022 [2024-12-16 19:21:57.359485] mngt/ftl_mngt.c:
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.022 [2024-12-16 19:21:57.359583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:13.022 [2024-12-16 19:21:57.359594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.336 ms 00:20:13.022 [2024-12-16 19:21:57.359600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.285 [2024-12-16 19:21:57.377369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.285 [2024-12-16 19:21:57.377395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:13.285 [2024-12-16 19:21:57.377404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.732 ms 00:20:13.285 [2024-12-16 19:21:57.377411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.285 [2024-12-16 19:21:57.388851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.285 [2024-12-16 19:21:57.388878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:13.285 [2024-12-16 19:21:57.388889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.413 ms 00:20:13.285 [2024-12-16 19:21:57.388895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.285 [2024-12-16 19:21:57.388992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.285 [2024-12-16 19:21:57.389000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:13.285 [2024-12-16 19:21:57.389006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:20:13.285 [2024-12-16 19:21:57.389016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.285 [2024-12-16 19:21:57.406508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.285 [2024-12-16 19:21:57.406608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:13.285 [2024-12-16 19:21:57.406620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.480 ms 00:20:13.285 [2024-12-16 19:21:57.406625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.285 [2024-12-16 19:21:57.423990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.285 [2024-12-16 19:21:57.424014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:13.285 [2024-12-16 19:21:57.424021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.334 ms 00:20:13.285 [2024-12-16 19:21:57.424027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.285 [2024-12-16 19:21:57.440779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.285 [2024-12-16 19:21:57.440802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:13.285 [2024-12-16 19:21:57.440810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.727 ms 00:20:13.285 [2024-12-16 19:21:57.440815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.285 [2024-12-16 19:21:57.457571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.285 [2024-12-16 19:21:57.457667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:13.285 [2024-12-16 19:21:57.457679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.711 ms 00:20:13.285 [2024-12-16 19:21:57.457684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:20:13.285 [2024-12-16 19:21:57.457707] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:13.285 [2024-12-16 19:21:57.457718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:13.285 [2024-12-16 19:21:57.457725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:13.285 [2024-12-16 19:21:57.457730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:13.285 [2024-12-16 19:21:57.457736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:13.285 [2024-12-16 19:21:57.457741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:13.285 [2024-12-16 19:21:57.457747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:13.285 [2024-12-16 19:21:57.457752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:13.285 [2024-12-16 19:21:57.457758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:13.285 [2024-12-16 19:21:57.457763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:13.285 [2024-12-16 19:21:57.457769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:13.285 [2024-12-16 19:21:57.457774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:13.285 [2024-12-16 19:21:57.457780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:13.285 [2024-12-16 19:21:57.457785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:13.285 [2024-12-16 19:21:57.457790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:13.285 [2024-12-16 19:21:57.457796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:13.285 [2024-12-16 19:21:57.457801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:13.286 [2024-12-16 19:21:57.457807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:13.286 [2024-12-16 19:21:57.457812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:13.286 [2024-12-16 19:21:57.457818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:13.286 [2024-12-16 19:21:57.457823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:13.286 [2024-12-16 19:21:57.457829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:13.286 [2024-12-16 19:21:57.457834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:13.286 [2024-12-16 19:21:57.457839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:13.286 [2024-12-16 19:21:57.457845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 
state: free 00:20:13.286 [2024-12-16 19:21:57.457850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:13.286 [2024-12-16 19:21:57.457855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:13.286 [2024-12-16 19:21:57.457861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:13.286 [2024-12-16 19:21:57.457867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:13.286 [2024-12-16 19:21:57.457873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:13.286 [2024-12-16 19:21:57.457878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:13.286 [2024-12-16 19:21:57.457885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:13.286 [2024-12-16 19:21:57.457891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:13.286 [2024-12-16 19:21:57.457897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:13.286 [2024-12-16 19:21:57.457902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:13.286 [2024-12-16 19:21:57.457907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:13.286 [2024-12-16 19:21:57.457913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:13.286 [2024-12-16 19:21:57.457918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:13.286 [2024-12-16 19:21:57.457923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:13.286 [2024-12-16 19:21:57.457929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:13.286 [2024-12-16 19:21:57.457934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:13.286 [2024-12-16 19:21:57.457940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:13.286 [2024-12-16 19:21:57.457946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:13.286 [2024-12-16 19:21:57.457951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:13.286 [2024-12-16 19:21:57.457956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:13.286 [2024-12-16 19:21:57.457962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:13.286 [2024-12-16 19:21:57.457967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:13.286 [2024-12-16 19:21:57.457973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:13.286 [2024-12-16 19:21:57.457979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:13.286 [2024-12-16 19:21:57.457984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 
0 / 261120 wr_cnt: 0 state: free 00:20:13.286 [2024-12-16 19:21:57.457989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:13.286 [2024-12-16 19:21:57.457995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:13.286 [2024-12-16 19:21:57.458000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:13.286 [2024-12-16 19:21:57.458005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:13.286 [2024-12-16 19:21:57.458011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:13.286 [2024-12-16 19:21:57.458016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:13.286 [2024-12-16 19:21:57.458021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:13.286 [2024-12-16 19:21:57.458027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:13.286 [2024-12-16 19:21:57.458033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:13.286 [2024-12-16 19:21:57.458039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:13.286 [2024-12-16 19:21:57.458044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:13.286 [2024-12-16 19:21:57.458050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:13.286 [2024-12-16 19:21:57.458056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:13.286 [2024-12-16 19:21:57.458062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:13.286 [2024-12-16 19:21:57.458067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:13.286 [2024-12-16 19:21:57.458073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:13.286 [2024-12-16 19:21:57.458079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:13.286 [2024-12-16 19:21:57.458084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:13.286 [2024-12-16 19:21:57.458090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:13.286 [2024-12-16 19:21:57.458095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:13.286 [2024-12-16 19:21:57.458101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:13.286 [2024-12-16 19:21:57.458106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:13.286 [2024-12-16 19:21:57.458112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:13.286 [2024-12-16 19:21:57.458118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:13.286 [2024-12-16 19:21:57.458123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:13.286 [2024-12-16 19:21:57.458128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:13.286 [2024-12-16 19:21:57.458134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:13.286 [2024-12-16 19:21:57.458139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:13.286 [2024-12-16 19:21:57.458145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:13.286 [2024-12-16 19:21:57.458150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:13.286 [2024-12-16 19:21:57.458156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:13.286 [2024-12-16 19:21:57.458161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:13.286 [2024-12-16 19:21:57.458167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:13.286 [2024-12-16 19:21:57.458186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:13.286 [2024-12-16 19:21:57.458192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:13.286 [2024-12-16 19:21:57.458197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:13.286 [2024-12-16 19:21:57.458203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:13.286 [2024-12-16 19:21:57.458209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:13.286 [2024-12-16 19:21:57.458215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:13.286 [2024-12-16 19:21:57.458221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:13.286 [2024-12-16 19:21:57.458226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:13.286 [2024-12-16 19:21:57.458232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:13.286 [2024-12-16 19:21:57.458237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:13.286 [2024-12-16 19:21:57.458244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:13.286 [2024-12-16 19:21:57.458250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:13.286 [2024-12-16 19:21:57.458261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:13.286 [2024-12-16 19:21:57.458266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:13.286 [2024-12-16 19:21:57.458272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:13.286 [2024-12-16 19:21:57.458278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:13.286 [2024-12-16 19:21:57.458284] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:13.286 [2024-12-16 19:21:57.458289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:13.286 [2024-12-16 19:21:57.458301] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:13.286 [2024-12-16 19:21:57.458307] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 36f840fd-a489-4625-9f63-997690818b8f 00:20:13.286 [2024-12-16 19:21:57.458313] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:13.286 [2024-12-16 19:21:57.458319] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:13.286 [2024-12-16 19:21:57.458324] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:13.286 [2024-12-16 19:21:57.458330] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:13.286 [2024-12-16 19:21:57.458335] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:13.286 [2024-12-16 19:21:57.458341] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:13.287 [2024-12-16 19:21:57.458349] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:13.287 [2024-12-16 19:21:57.458354] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:13.287 [2024-12-16 19:21:57.458359] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:13.287 [2024-12-16 19:21:57.458364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.287 [2024-12-16 19:21:57.458384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:13.287 [2024-12-16 19:21:57.458459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.657 ms 00:20:13.287 [2024-12-16 19:21:57.458465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.287 [2024-12-16 19:21:57.467817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.287 [2024-12-16 19:21:57.467838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:13.287 [2024-12-16 19:21:57.467846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.338 ms 00:20:13.287 [2024-12-16 19:21:57.467852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.287 [2024-12-16 19:21:57.468129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.287 [2024-12-16 19:21:57.468140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:13.287 [2024-12-16 19:21:57.468146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.255 ms 00:20:13.287 [2024-12-16 19:21:57.468152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.287 [2024-12-16 19:21:57.495369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:13.287 [2024-12-16 19:21:57.495394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:13.287 [2024-12-16 19:21:57.495401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:13.287 [2024-12-16 19:21:57.495408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.287 [2024-12-16 19:21:57.495462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:13.287 [2024-12-16 19:21:57.495468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:13.287 
[2024-12-16 19:21:57.495474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:13.287 [2024-12-16 19:21:57.495479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.287 [2024-12-16 19:21:57.495509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:13.287 [2024-12-16 19:21:57.495516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:13.287 [2024-12-16 19:21:57.495522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:13.287 [2024-12-16 19:21:57.495527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.287 [2024-12-16 19:21:57.495540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:13.287 [2024-12-16 19:21:57.495548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:13.287 [2024-12-16 19:21:57.495553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:13.287 [2024-12-16 19:21:57.495559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.287 [2024-12-16 19:21:57.553614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:13.287 [2024-12-16 19:21:57.553739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:13.287 [2024-12-16 19:21:57.553753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:13.287 [2024-12-16 19:21:57.553759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.287 [2024-12-16 19:21:57.601139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:13.287 [2024-12-16 19:21:57.601262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:13.287 [2024-12-16 19:21:57.601274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:13.287 [2024-12-16 19:21:57.601280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.287 [2024-12-16 19:21:57.601318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:13.287 [2024-12-16 19:21:57.601326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:13.287 [2024-12-16 19:21:57.601332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:13.287 [2024-12-16 19:21:57.601337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.287 [2024-12-16 19:21:57.601358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:13.287 [2024-12-16 19:21:57.601364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:13.287 [2024-12-16 19:21:57.601374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:13.287 [2024-12-16 19:21:57.601380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.287 [2024-12-16 19:21:57.601450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:13.287 [2024-12-16 19:21:57.601458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:13.287 [2024-12-16 19:21:57.601464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:13.287 [2024-12-16 19:21:57.601470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.287 [2024-12-16 19:21:57.601493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:13.287 [2024-12-16 19:21:57.601500] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:13.287 [2024-12-16 19:21:57.601506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:13.287 [2024-12-16 19:21:57.601514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.287 [2024-12-16 19:21:57.601543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:13.287 [2024-12-16 19:21:57.601550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:13.287 [2024-12-16 19:21:57.601556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:13.287 [2024-12-16 19:21:57.601561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.287 [2024-12-16 19:21:57.601593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:13.287 [2024-12-16 19:21:57.601600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:13.287 [2024-12-16 19:21:57.601608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:13.287 [2024-12-16 19:21:57.601614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.287 [2024-12-16 19:21:57.601713] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 257.070 ms, result 0 00:20:14.230 00:20:14.230 00:20:14.230 19:21:58 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=78464 00:20:14.230 19:21:58 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 78464 00:20:14.230 19:21:58 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:20:14.230 19:21:58 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 78464 ']' 00:20:14.230 19:21:58 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:14.230 19:21:58 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:14.230 19:21:58 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:14.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:14.230 19:21:58 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:14.230 19:21:58 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:20:14.230 [2024-12-16 19:21:58.450320] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
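
What trim.sh is doing here is the standard SPDK start-and-wait dance: launch spdk_tgt, block until the RPC socket answers, then replay a saved configuration over rpc.py. A minimal standalone sketch of the same pattern, using the repo paths visible in the log; the polling loop is an illustrative stand-in for the waitforlisten helper, and ftl.json is a hypothetical saved-config file, not one the test actually uses:

  spdk=/home/vagrant/spdk_repo/spdk
  "$spdk/build/bin/spdk_tgt" -L ftl_init &        # same binary and debug-log flag as above
  svcpid=$!
  # crude stand-in for waitforlisten: poll the default RPC socket until it answers
  until "$spdk/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
          sleep 0.5
  done
  "$spdk/scripts/rpc.py" load_config < ftl.json   # ftl.json: hypothetical saved config
  # ... run the trim workload against the restored ftl0 device ...
  kill "$svcpid"

The load_config call is what replays the bdev/FTL configuration, which is why the log just below picks up with bdev_open_ext retries for nvc0n1 and then a fresh FTL startup sequence (Check configuration, Open base bdev, Load super block, ...).
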
00:20:14.230 [2024-12-16 19:21:58.450456] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78464 ] 00:20:14.491 [2024-12-16 19:21:58.603622] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:14.491 [2024-12-16 19:21:58.677643] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:20:15.064 19:21:59 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:15.064 19:21:59 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:20:15.064 19:21:59 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:20:15.325 [2024-12-16 19:21:59.481711] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:15.325 [2024-12-16 19:21:59.481758] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:15.325 [2024-12-16 19:21:59.640558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.325 [2024-12-16 19:21:59.640593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:15.325 [2024-12-16 19:21:59.640605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:15.325 [2024-12-16 19:21:59.640612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.325 [2024-12-16 19:21:59.642688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.325 [2024-12-16 19:21:59.642820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:15.325 [2024-12-16 19:21:59.642835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.061 ms 00:20:15.325 [2024-12-16 19:21:59.642841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.325 [2024-12-16 19:21:59.642903] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:15.325 [2024-12-16 19:21:59.643449] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:15.325 [2024-12-16 19:21:59.643467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.325 [2024-12-16 19:21:59.643473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:15.325 [2024-12-16 19:21:59.643481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.573 ms 00:20:15.325 [2024-12-16 19:21:59.643487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.325 [2024-12-16 19:21:59.644664] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:15.325 [2024-12-16 19:21:59.654213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.325 [2024-12-16 19:21:59.654243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:15.325 [2024-12-16 19:21:59.654253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.552 ms 00:20:15.325 [2024-12-16 19:21:59.654261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.325 [2024-12-16 19:21:59.654328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.325 [2024-12-16 19:21:59.654340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:15.326 [2024-12-16 19:21:59.654346] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:20:15.326 [2024-12-16 19:21:59.654353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.326 [2024-12-16 19:21:59.658588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.326 [2024-12-16 19:21:59.658615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:15.326 [2024-12-16 19:21:59.658623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.190 ms 00:20:15.326 [2024-12-16 19:21:59.658629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.326 [2024-12-16 19:21:59.658707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.326 [2024-12-16 19:21:59.658716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:15.326 [2024-12-16 19:21:59.658723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:20:15.326 [2024-12-16 19:21:59.658730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.326 [2024-12-16 19:21:59.658752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.326 [2024-12-16 19:21:59.658760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:15.326 [2024-12-16 19:21:59.658766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:15.326 [2024-12-16 19:21:59.658772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.326 [2024-12-16 19:21:59.658790] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:15.326 [2024-12-16 19:21:59.661505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.326 [2024-12-16 19:21:59.661616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:15.326 [2024-12-16 19:21:59.661632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.718 ms 00:20:15.326 [2024-12-16 19:21:59.661638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.326 [2024-12-16 19:21:59.661669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.326 [2024-12-16 19:21:59.661676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:15.326 [2024-12-16 19:21:59.661683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:15.326 [2024-12-16 19:21:59.661690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.326 [2024-12-16 19:21:59.661706] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:15.326 [2024-12-16 19:21:59.661722] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:15.326 [2024-12-16 19:21:59.661754] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:15.326 [2024-12-16 19:21:59.661766] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:15.326 [2024-12-16 19:21:59.661848] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:15.326 [2024-12-16 19:21:59.661856] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:15.326 [2024-12-16 19:21:59.661867] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:15.326 [2024-12-16 19:21:59.661874] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:15.326 [2024-12-16 19:21:59.661882] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:15.326 [2024-12-16 19:21:59.661889] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:15.326 [2024-12-16 19:21:59.661895] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:15.326 [2024-12-16 19:21:59.661901] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:15.326 [2024-12-16 19:21:59.661909] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:15.326 [2024-12-16 19:21:59.661914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.326 [2024-12-16 19:21:59.661921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:15.326 [2024-12-16 19:21:59.661927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.211 ms 00:20:15.326 [2024-12-16 19:21:59.661934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.326 [2024-12-16 19:21:59.662001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.326 [2024-12-16 19:21:59.662008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:15.326 [2024-12-16 19:21:59.662013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:20:15.326 [2024-12-16 19:21:59.662020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.326 [2024-12-16 19:21:59.662097] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:15.326 [2024-12-16 19:21:59.662105] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:15.326 [2024-12-16 19:21:59.662110] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:15.326 [2024-12-16 19:21:59.662117] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:15.326 [2024-12-16 19:21:59.662123] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:15.326 [2024-12-16 19:21:59.662130] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:15.326 [2024-12-16 19:21:59.662135] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:15.326 [2024-12-16 19:21:59.662143] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:15.326 [2024-12-16 19:21:59.662149] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:15.326 [2024-12-16 19:21:59.662155] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:15.326 [2024-12-16 19:21:59.662160] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:15.326 [2024-12-16 19:21:59.662166] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:15.326 [2024-12-16 19:21:59.662187] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:15.326 [2024-12-16 19:21:59.662195] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:15.326 [2024-12-16 19:21:59.662200] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:15.326 [2024-12-16 19:21:59.662206] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:15.326 
[2024-12-16 19:21:59.662212] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:15.326 [2024-12-16 19:21:59.662219] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:15.326 [2024-12-16 19:21:59.662228] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:15.326 [2024-12-16 19:21:59.662235] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:15.326 [2024-12-16 19:21:59.662240] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:15.326 [2024-12-16 19:21:59.662246] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:15.326 [2024-12-16 19:21:59.662251] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:15.326 [2024-12-16 19:21:59.662259] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:15.326 [2024-12-16 19:21:59.662264] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:15.326 [2024-12-16 19:21:59.662270] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:15.326 [2024-12-16 19:21:59.662275] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:15.326 [2024-12-16 19:21:59.662281] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:15.326 [2024-12-16 19:21:59.662286] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:15.326 [2024-12-16 19:21:59.662293] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:15.326 [2024-12-16 19:21:59.662298] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:15.326 [2024-12-16 19:21:59.662304] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:15.326 [2024-12-16 19:21:59.662309] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:15.326 [2024-12-16 19:21:59.662315] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:15.326 [2024-12-16 19:21:59.662320] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:15.326 [2024-12-16 19:21:59.662326] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:15.326 [2024-12-16 19:21:59.662331] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:15.326 [2024-12-16 19:21:59.662337] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:15.326 [2024-12-16 19:21:59.662342] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:15.326 [2024-12-16 19:21:59.662349] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:15.326 [2024-12-16 19:21:59.662355] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:15.326 [2024-12-16 19:21:59.662361] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:15.326 [2024-12-16 19:21:59.662365] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:15.326 [2024-12-16 19:21:59.662386] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:15.326 [2024-12-16 19:21:59.662393] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:15.326 [2024-12-16 19:21:59.662400] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:15.326 [2024-12-16 19:21:59.662405] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:15.326 [2024-12-16 19:21:59.662412] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:20:15.326 [2024-12-16 19:21:59.662418] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:15.326 [2024-12-16 19:21:59.662425] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:15.326 [2024-12-16 19:21:59.662430] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:15.326 [2024-12-16 19:21:59.662436] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:15.326 [2024-12-16 19:21:59.662441] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:15.326 [2024-12-16 19:21:59.662449] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:15.326 [2024-12-16 19:21:59.662456] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:15.326 [2024-12-16 19:21:59.662466] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:15.326 [2024-12-16 19:21:59.662472] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:15.326 [2024-12-16 19:21:59.662478] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:15.326 [2024-12-16 19:21:59.662483] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:15.327 [2024-12-16 19:21:59.662490] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:15.327 [2024-12-16 19:21:59.662495] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:15.327 [2024-12-16 19:21:59.662501] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:15.327 [2024-12-16 19:21:59.662507] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:15.327 [2024-12-16 19:21:59.662513] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:15.327 [2024-12-16 19:21:59.662519] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:15.327 [2024-12-16 19:21:59.662525] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:15.327 [2024-12-16 19:21:59.662531] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:15.327 [2024-12-16 19:21:59.662537] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:15.327 [2024-12-16 19:21:59.662543] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:15.327 [2024-12-16 19:21:59.662549] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:15.327 [2024-12-16 
19:21:59.662555] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:15.327 [2024-12-16 19:21:59.662564] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:15.327 [2024-12-16 19:21:59.662570] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:15.327 [2024-12-16 19:21:59.662577] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:15.327 [2024-12-16 19:21:59.662582] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:15.327 [2024-12-16 19:21:59.662588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.327 [2024-12-16 19:21:59.662594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:15.327 [2024-12-16 19:21:59.662600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.545 ms 00:20:15.327 [2024-12-16 19:21:59.662607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.587 [2024-12-16 19:21:59.683243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.587 [2024-12-16 19:21:59.683267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:15.587 [2024-12-16 19:21:59.683277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.581 ms 00:20:15.587 [2024-12-16 19:21:59.683284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.587 [2024-12-16 19:21:59.683373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.587 [2024-12-16 19:21:59.683381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:15.587 [2024-12-16 19:21:59.683388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:20:15.587 [2024-12-16 19:21:59.683394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.587 [2024-12-16 19:21:59.707054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.587 [2024-12-16 19:21:59.707081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:15.587 [2024-12-16 19:21:59.707090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.643 ms 00:20:15.587 [2024-12-16 19:21:59.707095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.587 [2024-12-16 19:21:59.707138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.587 [2024-12-16 19:21:59.707145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:15.587 [2024-12-16 19:21:59.707152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:20:15.587 [2024-12-16 19:21:59.707157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.587 [2024-12-16 19:21:59.707450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.587 [2024-12-16 19:21:59.707465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:15.587 [2024-12-16 19:21:59.707476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.263 ms 00:20:15.587 [2024-12-16 19:21:59.707481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:20:15.587 [2024-12-16 19:21:59.707580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.587 [2024-12-16 19:21:59.707587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:15.587 [2024-12-16 19:21:59.707594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:20:15.587 [2024-12-16 19:21:59.707600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.587 [2024-12-16 19:21:59.719054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.587 [2024-12-16 19:21:59.719078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:15.587 [2024-12-16 19:21:59.719086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.438 ms 00:20:15.587 [2024-12-16 19:21:59.719092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.587 [2024-12-16 19:21:59.735817] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:20:15.587 [2024-12-16 19:21:59.735854] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:15.587 [2024-12-16 19:21:59.735870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.587 [2024-12-16 19:21:59.735878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:15.587 [2024-12-16 19:21:59.735890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.704 ms 00:20:15.587 [2024-12-16 19:21:59.735902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.587 [2024-12-16 19:21:59.756745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.587 [2024-12-16 19:21:59.756771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:15.587 [2024-12-16 19:21:59.756783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.753 ms 00:20:15.587 [2024-12-16 19:21:59.756789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.587 [2024-12-16 19:21:59.765674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.587 [2024-12-16 19:21:59.765698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:15.587 [2024-12-16 19:21:59.765709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.828 ms 00:20:15.588 [2024-12-16 19:21:59.765714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.588 [2024-12-16 19:21:59.774275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.588 [2024-12-16 19:21:59.774298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:15.588 [2024-12-16 19:21:59.774307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.520 ms 00:20:15.588 [2024-12-16 19:21:59.774313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.588 [2024-12-16 19:21:59.774776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.588 [2024-12-16 19:21:59.774790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:15.588 [2024-12-16 19:21:59.774799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.397 ms 00:20:15.588 [2024-12-16 19:21:59.774804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.588 [2024-12-16 
19:21:59.818086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.588 [2024-12-16 19:21:59.818120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:15.588 [2024-12-16 19:21:59.818131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.264 ms 00:20:15.588 [2024-12-16 19:21:59.818138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.588 [2024-12-16 19:21:59.825833] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:15.588 [2024-12-16 19:21:59.836969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.588 [2024-12-16 19:21:59.837108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:15.588 [2024-12-16 19:21:59.837124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.765 ms 00:20:15.588 [2024-12-16 19:21:59.837132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.588 [2024-12-16 19:21:59.837216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.588 [2024-12-16 19:21:59.837226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:15.588 [2024-12-16 19:21:59.837233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:15.588 [2024-12-16 19:21:59.837240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.588 [2024-12-16 19:21:59.837277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.588 [2024-12-16 19:21:59.837286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:15.588 [2024-12-16 19:21:59.837292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:20:15.588 [2024-12-16 19:21:59.837301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.588 [2024-12-16 19:21:59.837319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.588 [2024-12-16 19:21:59.837326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:15.588 [2024-12-16 19:21:59.837332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:15.588 [2024-12-16 19:21:59.837340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.588 [2024-12-16 19:21:59.837364] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:15.588 [2024-12-16 19:21:59.837374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.588 [2024-12-16 19:21:59.837381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:15.588 [2024-12-16 19:21:59.837388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:15.588 [2024-12-16 19:21:59.837394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.588 [2024-12-16 19:21:59.855092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.588 [2024-12-16 19:21:59.855119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:15.588 [2024-12-16 19:21:59.855130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.678 ms 00:20:15.588 [2024-12-16 19:21:59.855137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.588 [2024-12-16 19:21:59.855217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.588 [2024-12-16 19:21:59.855226] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:15.588 [2024-12-16 19:21:59.855234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:20:15.588 [2024-12-16 19:21:59.855242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.588 [2024-12-16 19:21:59.856126] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:15.588 [2024-12-16 19:21:59.858630] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 215.356 ms, result 0 00:20:15.588 [2024-12-16 19:21:59.859701] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:15.588 Some configs were skipped because the RPC state that can call them passed over. 00:20:15.588 19:21:59 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:20:15.849 [2024-12-16 19:22:00.066876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.849 [2024-12-16 19:22:00.066914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:20:15.849 [2024-12-16 19:22:00.066923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.402 ms 00:20:15.849 [2024-12-16 19:22:00.066931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.849 [2024-12-16 19:22:00.066955] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.482 ms, result 0 00:20:15.849 true 00:20:15.849 19:22:00 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:20:16.110 [2024-12-16 19:22:00.271304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.110 [2024-12-16 19:22:00.271335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:20:16.110 [2024-12-16 19:22:00.271345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.622 ms 00:20:16.110 [2024-12-16 19:22:00.271351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.110 [2024-12-16 19:22:00.271389] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.709 ms, result 0 00:20:16.110 true 00:20:16.110 19:22:00 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 78464 00:20:16.110 19:22:00 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 78464 ']' 00:20:16.110 19:22:00 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 78464 00:20:16.110 19:22:00 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:20:16.110 19:22:00 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:16.110 19:22:00 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78464 00:20:16.110 19:22:00 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:16.110 19:22:00 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:16.110 19:22:00 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78464' 00:20:16.110 killing process with pid 78464 00:20:16.110 19:22:00 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 78464 00:20:16.110 19:22:00 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 78464 00:20:16.684 [2024-12-16 19:22:00.848445] 
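The two bdev_ftl_unmap calls above are the trim step of this run: each sends a JSON-RPC request to the running SPDK app, which the management layer logs as an 'FTL trim' process and answers with true. A minimal sketch of the same step, using only the flags that actually appear in the trace (-b bdev name, --lba first logical block, --num_blocks trim length); the repo path mirrors this job's vagrant layout:

    # Trim 1024 blocks at the head and at the tail of the ftl0 bdev.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024

The second call starts at LBA 23591936 = 23592960 - 1024, i.e. it trims the last 1024 blocks of the 23592960-entry L2P address space reported in the layout dump above.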
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.684 [2024-12-16 19:22:00.848656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:16.684 [2024-12-16 19:22:00.848710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:20:16.684 [2024-12-16 19:22:00.848731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.684 [2024-12-16 19:22:00.848764] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:16.684 [2024-12-16 19:22:00.850856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.684 [2024-12-16 19:22:00.850942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:16.684 [2024-12-16 19:22:00.850992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.059 ms 00:20:16.684 [2024-12-16 19:22:00.851009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.684 [2024-12-16 19:22:00.851248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.684 [2024-12-16 19:22:00.851621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:16.684 [2024-12-16 19:22:00.851642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.210 ms 00:20:16.684 [2024-12-16 19:22:00.851650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.684 [2024-12-16 19:22:00.854826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.684 [2024-12-16 19:22:00.854909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:16.684 [2024-12-16 19:22:00.854985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.135 ms 00:20:16.684 [2024-12-16 19:22:00.855004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.684 [2024-12-16 19:22:00.860243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.684 [2024-12-16 19:22:00.860335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:16.684 [2024-12-16 19:22:00.860381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.176 ms 00:20:16.684 [2024-12-16 19:22:00.860398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.684 [2024-12-16 19:22:00.867748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.684 [2024-12-16 19:22:00.867776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:16.684 [2024-12-16 19:22:00.867786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.294 ms 00:20:16.684 [2024-12-16 19:22:00.867792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.684 [2024-12-16 19:22:00.874420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.684 [2024-12-16 19:22:00.874513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:16.684 [2024-12-16 19:22:00.874526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.597 ms 00:20:16.684 [2024-12-16 19:22:00.874532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.684 [2024-12-16 19:22:00.874637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.684 [2024-12-16 19:22:00.874644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:16.684 [2024-12-16 19:22:00.874652] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:20:16.684 [2024-12-16 19:22:00.874658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.684 [2024-12-16 19:22:00.882636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.684 [2024-12-16 19:22:00.882660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:16.684 [2024-12-16 19:22:00.882668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.963 ms 00:20:16.684 [2024-12-16 19:22:00.882674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.684 [2024-12-16 19:22:00.889986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.684 [2024-12-16 19:22:00.890068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:16.684 [2024-12-16 19:22:00.890083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.281 ms 00:20:16.684 [2024-12-16 19:22:00.890089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.684 [2024-12-16 19:22:00.897151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.684 [2024-12-16 19:22:00.897239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:16.684 [2024-12-16 19:22:00.897252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.034 ms 00:20:16.684 [2024-12-16 19:22:00.897258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.684 [2024-12-16 19:22:00.904025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.684 [2024-12-16 19:22:00.904049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:16.684 [2024-12-16 19:22:00.904057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.720 ms 00:20:16.684 [2024-12-16 19:22:00.904063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.684 [2024-12-16 19:22:00.904096] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:16.684 [2024-12-16 19:22:00.904107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:16.684 [2024-12-16 19:22:00.904116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:16.684 [2024-12-16 19:22:00.904122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:16.684 [2024-12-16 19:22:00.904128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:16.684 [2024-12-16 19:22:00.904134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:16.684 [2024-12-16 19:22:00.904142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:16.685 [2024-12-16 19:22:00.904148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:16.685 [2024-12-16 19:22:00.904155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:16.685 [2024-12-16 19:22:00.904161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:16.685 [2024-12-16 19:22:00.904168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:16.685 [2024-12-16 19:22:00.904184] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:16.685 [2024-12-16 19:22:00.904191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:16.685 [2024-12-16 19:22:00.904197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:16.685 [2024-12-16 19:22:00.904216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:16.685 [2024-12-16 19:22:00.904221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:16.685 [2024-12-16 19:22:00.904228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:16.685 [2024-12-16 19:22:00.904234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:16.685 [2024-12-16 19:22:00.904242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:16.685 [2024-12-16 19:22:00.904248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:16.685 [2024-12-16 19:22:00.904255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:16.685 [2024-12-16 19:22:00.904261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:16.685 [2024-12-16 19:22:00.904268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:16.685 [2024-12-16 19:22:00.904274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:16.685 [2024-12-16 19:22:00.904281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:16.685 [2024-12-16 19:22:00.904286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:16.685 [2024-12-16 19:22:00.904294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:16.685 [2024-12-16 19:22:00.904300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:16.685 [2024-12-16 19:22:00.904307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:16.685 [2024-12-16 19:22:00.904313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:16.685 [2024-12-16 19:22:00.904319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:16.685 [2024-12-16 19:22:00.904325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:16.685 [2024-12-16 19:22:00.904331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:16.685 [2024-12-16 19:22:00.904337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:16.685 [2024-12-16 19:22:00.904344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:16.685 [2024-12-16 19:22:00.904350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:16.685 
[2024-12-16 19:22:00.904357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:16.685 [2024-12-16 19:22:00.904369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:16.685 [2024-12-16 19:22:00.904377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:16.685 [2024-12-16 19:22:00.904383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:16.685 [2024-12-16 19:22:00.904389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:16.685 [2024-12-16 19:22:00.904395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:16.685 [2024-12-16 19:22:00.904401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:16.685 [2024-12-16 19:22:00.904407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:16.685 [2024-12-16 19:22:00.904415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:16.685 [2024-12-16 19:22:00.904421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:16.685 [2024-12-16 19:22:00.904428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:16.685 [2024-12-16 19:22:00.904433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:16.685 [2024-12-16 19:22:00.904439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:16.685 [2024-12-16 19:22:00.904445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:16.685 [2024-12-16 19:22:00.904451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:16.685 [2024-12-16 19:22:00.904457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:16.685 [2024-12-16 19:22:00.904464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:16.685 [2024-12-16 19:22:00.904469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:16.685 [2024-12-16 19:22:00.904477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:16.685 [2024-12-16 19:22:00.904482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:16.685 [2024-12-16 19:22:00.904489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:16.685 [2024-12-16 19:22:00.904495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:16.685 [2024-12-16 19:22:00.904501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:16.685 [2024-12-16 19:22:00.904507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:16.685 [2024-12-16 19:22:00.904514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 
state: free 00:20:16.685 [2024-12-16 19:22:00.904519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:16.685 [2024-12-16 19:22:00.904525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:16.685 [2024-12-16 19:22:00.904531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:16.685 [2024-12-16 19:22:00.904537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:16.685 [2024-12-16 19:22:00.904542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:16.685 [2024-12-16 19:22:00.904549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:16.685 [2024-12-16 19:22:00.904556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:16.685 [2024-12-16 19:22:00.904563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:16.685 [2024-12-16 19:22:00.904568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:16.685 [2024-12-16 19:22:00.904577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:16.685 [2024-12-16 19:22:00.904582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:16.685 [2024-12-16 19:22:00.904589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:16.685 [2024-12-16 19:22:00.904594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:16.685 [2024-12-16 19:22:00.904601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:16.685 [2024-12-16 19:22:00.904606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:16.685 [2024-12-16 19:22:00.904613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:16.685 [2024-12-16 19:22:00.904619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:16.685 [2024-12-16 19:22:00.904626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:16.685 [2024-12-16 19:22:00.904631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:16.685 [2024-12-16 19:22:00.904638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:16.685 [2024-12-16 19:22:00.904644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:16.685 [2024-12-16 19:22:00.904650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:16.685 [2024-12-16 19:22:00.904655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:16.685 [2024-12-16 19:22:00.904662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:16.685 [2024-12-16 19:22:00.904667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 
0 / 261120 wr_cnt: 0 state: free 00:20:16.685 [2024-12-16 19:22:00.904675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:16.685 [2024-12-16 19:22:00.904681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:16.685 [2024-12-16 19:22:00.904687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:16.685 [2024-12-16 19:22:00.904693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:16.685 [2024-12-16 19:22:00.904699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:16.685 [2024-12-16 19:22:00.904705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:16.685 [2024-12-16 19:22:00.904711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:16.685 [2024-12-16 19:22:00.904717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:16.685 [2024-12-16 19:22:00.904723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:16.685 [2024-12-16 19:22:00.904729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:16.685 [2024-12-16 19:22:00.904735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:16.686 [2024-12-16 19:22:00.904741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:16.686 [2024-12-16 19:22:00.904750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:16.686 [2024-12-16 19:22:00.904756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:16.686 [2024-12-16 19:22:00.904763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:16.686 [2024-12-16 19:22:00.904779] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:16.686 [2024-12-16 19:22:00.904790] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 36f840fd-a489-4625-9f63-997690818b8f 00:20:16.686 [2024-12-16 19:22:00.904798] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:16.686 [2024-12-16 19:22:00.904805] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:16.686 [2024-12-16 19:22:00.904810] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:16.686 [2024-12-16 19:22:00.904817] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:16.686 [2024-12-16 19:22:00.904822] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:16.686 [2024-12-16 19:22:00.904829] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:16.686 [2024-12-16 19:22:00.904834] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:16.686 [2024-12-16 19:22:00.904840] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:16.686 [2024-12-16 19:22:00.904845] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:16.686 [2024-12-16 19:22:00.904851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
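A note on the statistics dump above, hedged since the exact expression lives in SPDK's ftl_debug.c: the WAF line is the write-amplification factor, the ratio of total media writes to user writes,

    WAF = total writes / user writes = 960 / 0 -> inf

so 'inf' here simply reflects that all 960 blocks written during this short run were FTL metadata; no user data had been written before shutdown.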
00:20:16.686 [2024-12-16 19:22:00.904857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:16.686 [2024-12-16 19:22:00.904864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.756 ms 00:20:16.686 [2024-12-16 19:22:00.904869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.686 [2024-12-16 19:22:00.914487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.686 [2024-12-16 19:22:00.914511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:16.686 [2024-12-16 19:22:00.914521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.598 ms 00:20:16.686 [2024-12-16 19:22:00.914527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.686 [2024-12-16 19:22:00.914811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.686 [2024-12-16 19:22:00.914823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:16.686 [2024-12-16 19:22:00.914833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.251 ms 00:20:16.686 [2024-12-16 19:22:00.914839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.686 [2024-12-16 19:22:00.949649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:16.686 [2024-12-16 19:22:00.949675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:16.686 [2024-12-16 19:22:00.949684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:16.686 [2024-12-16 19:22:00.949690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.686 [2024-12-16 19:22:00.949763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:16.686 [2024-12-16 19:22:00.949771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:16.686 [2024-12-16 19:22:00.949781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:16.686 [2024-12-16 19:22:00.949787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.686 [2024-12-16 19:22:00.949819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:16.686 [2024-12-16 19:22:00.949826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:16.686 [2024-12-16 19:22:00.949834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:16.686 [2024-12-16 19:22:00.949839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.686 [2024-12-16 19:22:00.949855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:16.686 [2024-12-16 19:22:00.949861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:16.686 [2024-12-16 19:22:00.949868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:16.686 [2024-12-16 19:22:00.949874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.686 [2024-12-16 19:22:01.009646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:16.686 [2024-12-16 19:22:01.009678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:16.686 [2024-12-16 19:22:01.009688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:16.686 [2024-12-16 19:22:01.009694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.947 [2024-12-16 
19:22:01.057555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:16.947 [2024-12-16 19:22:01.057584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:16.947 [2024-12-16 19:22:01.057593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:16.947 [2024-12-16 19:22:01.057601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.947 [2024-12-16 19:22:01.057659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:16.947 [2024-12-16 19:22:01.057667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:16.947 [2024-12-16 19:22:01.057676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:16.947 [2024-12-16 19:22:01.057683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.947 [2024-12-16 19:22:01.057707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:16.947 [2024-12-16 19:22:01.057714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:16.947 [2024-12-16 19:22:01.057721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:16.947 [2024-12-16 19:22:01.057727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.947 [2024-12-16 19:22:01.057797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:16.947 [2024-12-16 19:22:01.057804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:16.947 [2024-12-16 19:22:01.057811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:16.947 [2024-12-16 19:22:01.057817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.947 [2024-12-16 19:22:01.057846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:16.947 [2024-12-16 19:22:01.057853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:16.947 [2024-12-16 19:22:01.057860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:16.947 [2024-12-16 19:22:01.057866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.947 [2024-12-16 19:22:01.057899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:16.947 [2024-12-16 19:22:01.057905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:16.947 [2024-12-16 19:22:01.057914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:16.947 [2024-12-16 19:22:01.057921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.947 [2024-12-16 19:22:01.057957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:16.947 [2024-12-16 19:22:01.057965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:16.947 [2024-12-16 19:22:01.057972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:16.947 [2024-12-16 19:22:01.057978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.947 [2024-12-16 19:22:01.058082] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 209.616 ms, result 0 00:20:17.518 19:22:01 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:20:17.518 19:22:01 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:17.518 [2024-12-16 19:22:01.640734] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:20:17.518 [2024-12-16 19:22:01.640857] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78511 ] 00:20:17.518 [2024-12-16 19:22:01.796401] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:17.780 [2024-12-16 19:22:01.880529] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:20:17.780 [2024-12-16 19:22:02.088526] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:17.780 [2024-12-16 19:22:02.088578] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:18.045 [2024-12-16 19:22:02.240354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.045 [2024-12-16 19:22:02.240388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:18.045 [2024-12-16 19:22:02.240398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:18.045 [2024-12-16 19:22:02.240404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.045 [2024-12-16 19:22:02.242471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.045 [2024-12-16 19:22:02.242499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:18.045 [2024-12-16 19:22:02.242507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.055 ms 00:20:18.045 [2024-12-16 19:22:02.242513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.045 [2024-12-16 19:22:02.242568] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:18.045 [2024-12-16 19:22:02.243077] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:18.045 [2024-12-16 19:22:02.243099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.045 [2024-12-16 19:22:02.243105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:18.045 [2024-12-16 19:22:02.243112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.536 ms 00:20:18.045 [2024-12-16 19:22:02.243117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.045 [2024-12-16 19:22:02.244133] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:18.045 [2024-12-16 19:22:02.253580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.045 [2024-12-16 19:22:02.253606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:18.045 [2024-12-16 19:22:02.253615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.449 ms 00:20:18.045 [2024-12-16 19:22:02.253620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.045 [2024-12-16 19:22:02.253688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.046 [2024-12-16 19:22:02.253697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:18.046 [2024-12-16 19:22:02.253704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
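The spdk_dd invocation above is the read-back step: it boots a short-lived SPDK app from the saved JSON config (which re-creates ftl0, as the second FTL startup trace that follows shows) and copies blocks out of the bdev into a regular file. A minimal sketch, using only the options visible in the trace (--ib input bdev, --of output file, --count number of blocks to copy, --json SPDK config):

    # Copy 65536 blocks from bdev ftl0 into a plain file.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 \
        --of=/home/vagrant/spdk_repo/spdk/test/ftl/data \
        --count=65536 \
        --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json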
[FTL][ftl0] duration: 0.016 ms 00:20:18.046 [2024-12-16 19:22:02.253710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.046 [2024-12-16 19:22:02.257917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.046 [2024-12-16 19:22:02.257940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:18.046 [2024-12-16 19:22:02.257947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.178 ms 00:20:18.046 [2024-12-16 19:22:02.257953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.046 [2024-12-16 19:22:02.258021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.046 [2024-12-16 19:22:02.258028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:18.046 [2024-12-16 19:22:02.258034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:20:18.046 [2024-12-16 19:22:02.258040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.046 [2024-12-16 19:22:02.258058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.046 [2024-12-16 19:22:02.258064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:18.046 [2024-12-16 19:22:02.258070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:18.046 [2024-12-16 19:22:02.258075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.046 [2024-12-16 19:22:02.258091] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:18.046 [2024-12-16 19:22:02.260757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.046 [2024-12-16 19:22:02.260779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:18.046 [2024-12-16 19:22:02.260786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.669 ms 00:20:18.046 [2024-12-16 19:22:02.260792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.046 [2024-12-16 19:22:02.260820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.046 [2024-12-16 19:22:02.260831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:18.046 [2024-12-16 19:22:02.260837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:18.046 [2024-12-16 19:22:02.260843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.046 [2024-12-16 19:22:02.260858] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:18.046 [2024-12-16 19:22:02.260872] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:18.046 [2024-12-16 19:22:02.260897] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:18.046 [2024-12-16 19:22:02.260908] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:18.046 [2024-12-16 19:22:02.260989] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:18.046 [2024-12-16 19:22:02.260997] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:18.046 [2024-12-16 19:22:02.261005] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:18.046 [2024-12-16 19:22:02.261015] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:18.046 [2024-12-16 19:22:02.261022] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:18.046 [2024-12-16 19:22:02.261028] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:18.046 [2024-12-16 19:22:02.261033] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:18.046 [2024-12-16 19:22:02.261038] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:18.046 [2024-12-16 19:22:02.261043] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:18.046 [2024-12-16 19:22:02.261049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.046 [2024-12-16 19:22:02.261054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:18.046 [2024-12-16 19:22:02.261060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.193 ms 00:20:18.046 [2024-12-16 19:22:02.261066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.046 [2024-12-16 19:22:02.261132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.046 [2024-12-16 19:22:02.261141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:18.046 [2024-12-16 19:22:02.261147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:20:18.046 [2024-12-16 19:22:02.261152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.046 [2024-12-16 19:22:02.261232] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:18.046 [2024-12-16 19:22:02.261240] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:18.046 [2024-12-16 19:22:02.261248] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:18.046 [2024-12-16 19:22:02.261254] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:18.046 [2024-12-16 19:22:02.261260] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:18.046 [2024-12-16 19:22:02.261265] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:18.046 [2024-12-16 19:22:02.261270] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:18.046 [2024-12-16 19:22:02.261275] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:18.046 [2024-12-16 19:22:02.261280] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:18.046 [2024-12-16 19:22:02.261285] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:18.046 [2024-12-16 19:22:02.261290] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:18.046 [2024-12-16 19:22:02.261300] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:18.046 [2024-12-16 19:22:02.261305] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:18.046 [2024-12-16 19:22:02.261311] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:18.046 [2024-12-16 19:22:02.261316] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:18.046 [2024-12-16 19:22:02.261321] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:18.046 [2024-12-16 19:22:02.261326] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:18.046 [2024-12-16 19:22:02.261331] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:18.046 [2024-12-16 19:22:02.261336] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:18.046 [2024-12-16 19:22:02.261341] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:18.046 [2024-12-16 19:22:02.261346] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:18.046 [2024-12-16 19:22:02.261351] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:18.046 [2024-12-16 19:22:02.261356] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:18.046 [2024-12-16 19:22:02.261361] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:18.046 [2024-12-16 19:22:02.261365] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:18.046 [2024-12-16 19:22:02.261370] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:18.046 [2024-12-16 19:22:02.261375] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:18.046 [2024-12-16 19:22:02.261380] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:18.046 [2024-12-16 19:22:02.261385] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:18.046 [2024-12-16 19:22:02.261390] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:18.046 [2024-12-16 19:22:02.261395] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:18.046 [2024-12-16 19:22:02.261400] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:18.046 [2024-12-16 19:22:02.261405] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:18.046 [2024-12-16 19:22:02.261409] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:18.046 [2024-12-16 19:22:02.261414] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:18.046 [2024-12-16 19:22:02.261419] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:18.046 [2024-12-16 19:22:02.261424] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:18.046 [2024-12-16 19:22:02.261429] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:18.046 [2024-12-16 19:22:02.261434] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:18.046 [2024-12-16 19:22:02.261438] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:18.046 [2024-12-16 19:22:02.261444] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:18.046 [2024-12-16 19:22:02.261449] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:18.046 [2024-12-16 19:22:02.261453] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:18.046 [2024-12-16 19:22:02.261459] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:18.046 [2024-12-16 19:22:02.261465] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:18.046 [2024-12-16 19:22:02.261472] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:18.046 [2024-12-16 19:22:02.261477] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:18.046 [2024-12-16 19:22:02.261483] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:18.046 
[2024-12-16 19:22:02.261488] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:18.046 [2024-12-16 19:22:02.261494] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:18.046 [2024-12-16 19:22:02.261499] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:18.046 [2024-12-16 19:22:02.261504] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:18.046 [2024-12-16 19:22:02.261509] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:18.046 [2024-12-16 19:22:02.261515] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:18.047 [2024-12-16 19:22:02.261522] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:18.047 [2024-12-16 19:22:02.261528] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:18.047 [2024-12-16 19:22:02.261533] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:18.047 [2024-12-16 19:22:02.261538] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:18.047 [2024-12-16 19:22:02.261543] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:18.047 [2024-12-16 19:22:02.261549] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:18.047 [2024-12-16 19:22:02.261554] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:18.047 [2024-12-16 19:22:02.261560] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:18.047 [2024-12-16 19:22:02.261565] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:18.047 [2024-12-16 19:22:02.261570] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:18.047 [2024-12-16 19:22:02.261575] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:18.047 [2024-12-16 19:22:02.261580] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:18.047 [2024-12-16 19:22:02.261585] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:18.047 [2024-12-16 19:22:02.261591] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:18.047 [2024-12-16 19:22:02.261596] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:18.047 [2024-12-16 19:22:02.261601] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:18.047 [2024-12-16 19:22:02.261607] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:18.047 [2024-12-16 19:22:02.261613] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:18.047 [2024-12-16 19:22:02.261618] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:18.047 [2024-12-16 19:22:02.261623] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:18.047 [2024-12-16 19:22:02.261629] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:18.047 [2024-12-16 19:22:02.261635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.047 [2024-12-16 19:22:02.261642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:18.047 [2024-12-16 19:22:02.261648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.455 ms 00:20:18.047 [2024-12-16 19:22:02.261654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.047 [2024-12-16 19:22:02.282291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.047 [2024-12-16 19:22:02.282419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:18.047 [2024-12-16 19:22:02.282432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.598 ms 00:20:18.047 [2024-12-16 19:22:02.282438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.047 [2024-12-16 19:22:02.282535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.047 [2024-12-16 19:22:02.282543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:18.047 [2024-12-16 19:22:02.282550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:20:18.047 [2024-12-16 19:22:02.282555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.047 [2024-12-16 19:22:02.322233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.047 [2024-12-16 19:22:02.322262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:18.047 [2024-12-16 19:22:02.322273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.661 ms 00:20:18.047 [2024-12-16 19:22:02.322280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.047 [2024-12-16 19:22:02.322338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.047 [2024-12-16 19:22:02.322347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:18.047 [2024-12-16 19:22:02.322354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:18.047 [2024-12-16 19:22:02.322360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.047 [2024-12-16 19:22:02.322650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.047 [2024-12-16 19:22:02.322674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:18.047 [2024-12-16 19:22:02.322681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.262 ms 00:20:18.047 [2024-12-16 19:22:02.322692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.047 [2024-12-16 
19:22:02.322793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.047 [2024-12-16 19:22:02.322804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:18.047 [2024-12-16 19:22:02.322811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:20:18.047 [2024-12-16 19:22:02.322817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.047 [2024-12-16 19:22:02.333474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.047 [2024-12-16 19:22:02.333576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:18.047 [2024-12-16 19:22:02.333589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.641 ms 00:20:18.047 [2024-12-16 19:22:02.333595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.047 [2024-12-16 19:22:02.343422] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:20:18.047 [2024-12-16 19:22:02.343448] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:18.047 [2024-12-16 19:22:02.343457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.047 [2024-12-16 19:22:02.343464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:18.047 [2024-12-16 19:22:02.343470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.773 ms 00:20:18.047 [2024-12-16 19:22:02.343475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.047 [2024-12-16 19:22:02.362241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.047 [2024-12-16 19:22:02.362269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:18.047 [2024-12-16 19:22:02.362278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.704 ms 00:20:18.047 [2024-12-16 19:22:02.362284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.047 [2024-12-16 19:22:02.371212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.047 [2024-12-16 19:22:02.371238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:18.047 [2024-12-16 19:22:02.371246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.876 ms 00:20:18.047 [2024-12-16 19:22:02.371251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.047 [2024-12-16 19:22:02.380310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.047 [2024-12-16 19:22:02.380334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:18.047 [2024-12-16 19:22:02.380341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.020 ms 00:20:18.047 [2024-12-16 19:22:02.380347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.047 [2024-12-16 19:22:02.380805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.047 [2024-12-16 19:22:02.380825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:18.047 [2024-12-16 19:22:02.380832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.400 ms 00:20:18.047 [2024-12-16 19:22:02.380837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.341 [2024-12-16 19:22:02.424616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:20:18.341 [2024-12-16 19:22:02.424654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:18.341 [2024-12-16 19:22:02.424665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.762 ms 00:20:18.341 [2024-12-16 19:22:02.424672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.341 [2024-12-16 19:22:02.432802] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:18.341 [2024-12-16 19:22:02.444404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.341 [2024-12-16 19:22:02.444434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:18.341 [2024-12-16 19:22:02.444443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.667 ms 00:20:18.341 [2024-12-16 19:22:02.444453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.341 [2024-12-16 19:22:02.444525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.341 [2024-12-16 19:22:02.444533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:18.341 [2024-12-16 19:22:02.444541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:18.341 [2024-12-16 19:22:02.444547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.341 [2024-12-16 19:22:02.444582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.341 [2024-12-16 19:22:02.444590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:18.341 [2024-12-16 19:22:02.444596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:20:18.341 [2024-12-16 19:22:02.444604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.341 [2024-12-16 19:22:02.444629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.341 [2024-12-16 19:22:02.444637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:18.341 [2024-12-16 19:22:02.444643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:18.341 [2024-12-16 19:22:02.444648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.341 [2024-12-16 19:22:02.444672] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:18.342 [2024-12-16 19:22:02.444680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.342 [2024-12-16 19:22:02.444685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:18.342 [2024-12-16 19:22:02.444691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:18.342 [2024-12-16 19:22:02.444697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.342 [2024-12-16 19:22:02.463197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.342 [2024-12-16 19:22:02.463224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:18.342 [2024-12-16 19:22:02.463233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.483 ms 00:20:18.342 [2024-12-16 19:22:02.463239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.342 [2024-12-16 19:22:02.463306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.342 [2024-12-16 19:22:02.463314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:20:18.342 [2024-12-16 19:22:02.463321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:20:18.342 [2024-12-16 19:22:02.463327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.342 [2024-12-16 19:22:02.463973] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:18.342 [2024-12-16 19:22:02.466184] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 223.398 ms, result 0 00:20:18.342 [2024-12-16 19:22:02.467547] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:18.342 [2024-12-16 19:22:02.478284] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:19.325  [2024-12-16T19:22:04.624Z] Copying: 21/256 [MB] (21 MBps) [2024-12-16T19:22:05.568Z] Copying: 34/256 [MB] (12 MBps) [2024-12-16T19:22:06.510Z] Copying: 44/256 [MB] (10 MBps) [2024-12-16T19:22:07.896Z] Copying: 62/256 [MB] (17 MBps) [2024-12-16T19:22:08.839Z] Copying: 86/256 [MB] (24 MBps) [2024-12-16T19:22:09.783Z] Copying: 101/256 [MB] (15 MBps) [2024-12-16T19:22:10.725Z] Copying: 125/256 [MB] (24 MBps) [2024-12-16T19:22:11.667Z] Copying: 140/256 [MB] (15 MBps) [2024-12-16T19:22:12.610Z] Copying: 151/256 [MB] (10 MBps) [2024-12-16T19:22:13.558Z] Copying: 161/256 [MB] (10 MBps) [2024-12-16T19:22:14.501Z] Copying: 171/256 [MB] (10 MBps) [2024-12-16T19:22:15.887Z] Copying: 182/256 [MB] (10 MBps) [2024-12-16T19:22:16.830Z] Copying: 192/256 [MB] (10 MBps) [2024-12-16T19:22:17.775Z] Copying: 203/256 [MB] (10 MBps) [2024-12-16T19:22:18.720Z] Copying: 213/256 [MB] (10 MBps) [2024-12-16T19:22:19.664Z] Copying: 229/256 [MB] (16 MBps) [2024-12-16T19:22:20.238Z] Copying: 246/256 [MB] (17 MBps) [2024-12-16T19:22:20.238Z] Copying: 256/256 [MB] (average 14 MBps)[2024-12-16 19:22:20.024257] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:35.884 [2024-12-16 19:22:20.034915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.884 [2024-12-16 19:22:20.034974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:35.884 [2024-12-16 19:22:20.034995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:35.884 [2024-12-16 19:22:20.035003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.884 [2024-12-16 19:22:20.035029] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:35.884 [2024-12-16 19:22:20.038008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.884 [2024-12-16 19:22:20.038050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:35.884 [2024-12-16 19:22:20.038061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.964 ms 00:20:35.884 [2024-12-16 19:22:20.038070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.884 [2024-12-16 19:22:20.038354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.884 [2024-12-16 19:22:20.038393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:35.884 [2024-12-16 19:22:20.038403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.256 ms 00:20:35.884 [2024-12-16 19:22:20.038412] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:20:35.884 [2024-12-16 19:22:20.042094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.884 [2024-12-16 19:22:20.042116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:35.884 [2024-12-16 19:22:20.042127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.662 ms 00:20:35.884 [2024-12-16 19:22:20.042135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.884 [2024-12-16 19:22:20.049002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.884 [2024-12-16 19:22:20.049049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:35.884 [2024-12-16 19:22:20.049061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.848 ms 00:20:35.884 [2024-12-16 19:22:20.049069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.884 [2024-12-16 19:22:20.075768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.884 [2024-12-16 19:22:20.075824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:35.884 [2024-12-16 19:22:20.075837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.616 ms 00:20:35.884 [2024-12-16 19:22:20.075845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.884 [2024-12-16 19:22:20.093157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.884 [2024-12-16 19:22:20.093224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:35.884 [2024-12-16 19:22:20.093244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.245 ms 00:20:35.884 [2024-12-16 19:22:20.093252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.884 [2024-12-16 19:22:20.093413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.884 [2024-12-16 19:22:20.093426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:35.884 [2024-12-16 19:22:20.093445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:20:35.884 [2024-12-16 19:22:20.093453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.884 [2024-12-16 19:22:20.119881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.884 [2024-12-16 19:22:20.119930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:35.884 [2024-12-16 19:22:20.119941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.410 ms 00:20:35.884 [2024-12-16 19:22:20.119948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.884 [2024-12-16 19:22:20.146359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.884 [2024-12-16 19:22:20.146423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:35.884 [2024-12-16 19:22:20.146435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.345 ms 00:20:35.884 [2024-12-16 19:22:20.146442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.884 [2024-12-16 19:22:20.171598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.884 [2024-12-16 19:22:20.171651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:35.884 [2024-12-16 19:22:20.171662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.072 ms 00:20:35.884 
[2024-12-16 19:22:20.171669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.884 [2024-12-16 19:22:20.197251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.884 [2024-12-16 19:22:20.197301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:35.884 [2024-12-16 19:22:20.197312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.500 ms 00:20:35.884 [2024-12-16 19:22:20.197319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.884 [2024-12-16 19:22:20.197383] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:35.884 [2024-12-16 19:22:20.197399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:35.884 [2024-12-16 19:22:20.197410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:35.884 [2024-12-16 19:22:20.197418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:35.884 [2024-12-16 19:22:20.197425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:35.884 [2024-12-16 19:22:20.197433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:35.884 [2024-12-16 19:22:20.197440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:35.884 [2024-12-16 19:22:20.197447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:35.884 [2024-12-16 19:22:20.197455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:35.884 [2024-12-16 19:22:20.197463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:35.884 [2024-12-16 19:22:20.197470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:35.884 [2024-12-16 19:22:20.197477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:35.884 [2024-12-16 19:22:20.197484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:35.884 [2024-12-16 19:22:20.197492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:35.884 [2024-12-16 19:22:20.197499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:35.884 [2024-12-16 19:22:20.197506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:35.884 [2024-12-16 19:22:20.197514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:35.884 [2024-12-16 19:22:20.197520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:35.884 [2024-12-16 19:22:20.197528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:35.884 [2024-12-16 19:22:20.197535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:35.884 [2024-12-16 19:22:20.197542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:35.884 [2024-12-16 19:22:20.197549] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:35.884 [2024-12-16 19:22:20.197556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:35.884 [2024-12-16 19:22:20.197563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:35.884 [2024-12-16 19:22:20.197570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:35.884 [2024-12-16 19:22:20.197577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:35.884 [2024-12-16 19:22:20.197584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:35.884 [2024-12-16 19:22:20.197591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:35.885 [2024-12-16 19:22:20.197598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:35.885 [2024-12-16 19:22:20.197606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:35.885 [2024-12-16 19:22:20.197616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:35.885 [2024-12-16 19:22:20.197624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:35.885 [2024-12-16 19:22:20.197632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:35.885 [2024-12-16 19:22:20.197639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:35.885 [2024-12-16 19:22:20.197646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:35.885 [2024-12-16 19:22:20.197653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:35.885 [2024-12-16 19:22:20.197660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:35.885 [2024-12-16 19:22:20.197667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:35.885 [2024-12-16 19:22:20.197674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:35.885 [2024-12-16 19:22:20.197682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:35.885 [2024-12-16 19:22:20.197689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:35.885 [2024-12-16 19:22:20.197696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:35.885 [2024-12-16 19:22:20.197703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:35.885 [2024-12-16 19:22:20.197711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:35.885 [2024-12-16 19:22:20.197721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:35.885 [2024-12-16 19:22:20.197729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:35.885 
[2024-12-16 19:22:20.197736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:35.885 [2024-12-16 19:22:20.197743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:35.885 [2024-12-16 19:22:20.197750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:35.885 [2024-12-16 19:22:20.197758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:35.885 [2024-12-16 19:22:20.197766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:35.885 [2024-12-16 19:22:20.197774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:35.885 [2024-12-16 19:22:20.197781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:35.885 [2024-12-16 19:22:20.197789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:35.885 [2024-12-16 19:22:20.197796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:35.885 [2024-12-16 19:22:20.197804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:35.885 [2024-12-16 19:22:20.197812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:35.885 [2024-12-16 19:22:20.197819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:35.885 [2024-12-16 19:22:20.197826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:35.885 [2024-12-16 19:22:20.197834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:35.885 [2024-12-16 19:22:20.197841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:35.885 [2024-12-16 19:22:20.197848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:35.885 [2024-12-16 19:22:20.197856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:35.885 [2024-12-16 19:22:20.197863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:35.885 [2024-12-16 19:22:20.197870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:35.885 [2024-12-16 19:22:20.197878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:35.885 [2024-12-16 19:22:20.197886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:35.885 [2024-12-16 19:22:20.197893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:35.885 [2024-12-16 19:22:20.197900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:35.885 [2024-12-16 19:22:20.197907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:35.885 [2024-12-16 19:22:20.197914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 
state: free 00:20:35.885 [2024-12-16 19:22:20.197921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:35.885 [2024-12-16 19:22:20.197929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:35.885 [2024-12-16 19:22:20.197936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:35.885 [2024-12-16 19:22:20.197943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:35.885 [2024-12-16 19:22:20.197950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:35.885 [2024-12-16 19:22:20.197957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:35.885 [2024-12-16 19:22:20.197965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:35.885 [2024-12-16 19:22:20.197973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:35.885 [2024-12-16 19:22:20.197980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:35.885 [2024-12-16 19:22:20.197987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:35.885 [2024-12-16 19:22:20.197995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:35.885 [2024-12-16 19:22:20.198003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:35.885 [2024-12-16 19:22:20.198010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:35.885 [2024-12-16 19:22:20.198018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:35.885 [2024-12-16 19:22:20.198025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:35.885 [2024-12-16 19:22:20.198034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:35.885 [2024-12-16 19:22:20.198042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:35.885 [2024-12-16 19:22:20.198049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:35.885 [2024-12-16 19:22:20.198056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:35.885 [2024-12-16 19:22:20.198063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:35.885 [2024-12-16 19:22:20.198071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:35.885 [2024-12-16 19:22:20.198078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:35.885 [2024-12-16 19:22:20.198086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:35.885 [2024-12-16 19:22:20.198107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:35.885 [2024-12-16 19:22:20.198114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 
0 / 261120 wr_cnt: 0 state: free 00:20:35.885 [2024-12-16 19:22:20.198122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:35.885 [2024-12-16 19:22:20.198129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:35.885 [2024-12-16 19:22:20.198136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:35.885 [2024-12-16 19:22:20.198144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:35.885 [2024-12-16 19:22:20.198151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:35.885 [2024-12-16 19:22:20.198167] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:35.885 [2024-12-16 19:22:20.198192] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 36f840fd-a489-4625-9f63-997690818b8f 00:20:35.885 [2024-12-16 19:22:20.198200] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:35.885 [2024-12-16 19:22:20.198208] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:35.885 [2024-12-16 19:22:20.198215] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:35.885 [2024-12-16 19:22:20.198223] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:35.885 [2024-12-16 19:22:20.198231] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:35.885 [2024-12-16 19:22:20.198239] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:35.885 [2024-12-16 19:22:20.198250] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:35.885 [2024-12-16 19:22:20.198257] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:35.885 [2024-12-16 19:22:20.198263] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:35.885 [2024-12-16 19:22:20.198271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.885 [2024-12-16 19:22:20.198278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:35.885 [2024-12-16 19:22:20.198288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.889 ms 00:20:35.885 [2024-12-16 19:22:20.198297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.885 [2024-12-16 19:22:20.211968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.885 [2024-12-16 19:22:20.212015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:35.885 [2024-12-16 19:22:20.212026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.649 ms 00:20:35.885 [2024-12-16 19:22:20.212035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.885 [2024-12-16 19:22:20.212464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.885 [2024-12-16 19:22:20.212487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:35.885 [2024-12-16 19:22:20.212497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.382 ms 00:20:35.885 [2024-12-16 19:22:20.212505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.147 [2024-12-16 19:22:20.251520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:36.147 [2024-12-16 19:22:20.251570] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:36.147 [2024-12-16 19:22:20.251581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:36.147 [2024-12-16 19:22:20.251595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.147 [2024-12-16 19:22:20.251693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:36.147 [2024-12-16 19:22:20.251704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:36.147 [2024-12-16 19:22:20.251715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:36.147 [2024-12-16 19:22:20.251723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.147 [2024-12-16 19:22:20.251776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:36.147 [2024-12-16 19:22:20.251786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:36.147 [2024-12-16 19:22:20.251794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:36.147 [2024-12-16 19:22:20.251801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.147 [2024-12-16 19:22:20.251822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:36.147 [2024-12-16 19:22:20.251831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:36.147 [2024-12-16 19:22:20.251838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:36.147 [2024-12-16 19:22:20.251846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.147 [2024-12-16 19:22:20.336554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:36.147 [2024-12-16 19:22:20.336610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:36.147 [2024-12-16 19:22:20.336623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:36.147 [2024-12-16 19:22:20.336631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.147 [2024-12-16 19:22:20.407512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:36.147 [2024-12-16 19:22:20.407569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:36.147 [2024-12-16 19:22:20.407581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:36.147 [2024-12-16 19:22:20.407590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.147 [2024-12-16 19:22:20.407693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:36.147 [2024-12-16 19:22:20.407704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:36.147 [2024-12-16 19:22:20.407720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:36.147 [2024-12-16 19:22:20.407728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.147 [2024-12-16 19:22:20.407761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:36.147 [2024-12-16 19:22:20.407773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:36.147 [2024-12-16 19:22:20.407782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:36.147 [2024-12-16 19:22:20.407790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.147 [2024-12-16 19:22:20.407891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
00:20:36.147 [2024-12-16 19:22:20.407902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:36.147 [2024-12-16 19:22:20.407910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:36.147 [2024-12-16 19:22:20.407919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.147 [2024-12-16 19:22:20.407955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:36.147 [2024-12-16 19:22:20.407966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:36.147 [2024-12-16 19:22:20.407978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:36.147 [2024-12-16 19:22:20.407987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.147 [2024-12-16 19:22:20.408033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:36.147 [2024-12-16 19:22:20.408044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:36.147 [2024-12-16 19:22:20.408052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:36.147 [2024-12-16 19:22:20.408061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.147 [2024-12-16 19:22:20.408109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:36.147 [2024-12-16 19:22:20.408125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:36.147 [2024-12-16 19:22:20.408133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:36.147 [2024-12-16 19:22:20.408143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.147 [2024-12-16 19:22:20.408329] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 373.411 ms, result 0 00:20:37.090 00:20:37.090 00:20:37.090 19:22:21 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:20:37.090 19:22:21 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:20:37.663 19:22:21 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:37.663 [2024-12-16 19:22:21.816033] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:20:37.663 [2024-12-16 19:22:21.816203] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78717 ] 00:20:37.663 [2024-12-16 19:22:21.978564] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:37.924 [2024-12-16 19:22:22.098934] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:20:38.210 [2024-12-16 19:22:22.393530] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:38.210 [2024-12-16 19:22:22.393624] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:38.477 [2024-12-16 19:22:22.555868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.477 [2024-12-16 19:22:22.555934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:38.477 [2024-12-16 19:22:22.555950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:38.477 [2024-12-16 19:22:22.555959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.477 [2024-12-16 19:22:22.559114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.478 [2024-12-16 19:22:22.559349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:38.478 [2024-12-16 19:22:22.559371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.133 ms 00:20:38.478 [2024-12-16 19:22:22.559380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.478 [2024-12-16 19:22:22.559605] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:38.478 [2024-12-16 19:22:22.560549] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:38.478 [2024-12-16 19:22:22.560603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.478 [2024-12-16 19:22:22.560611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:38.478 [2024-12-16 19:22:22.560621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.014 ms 00:20:38.478 [2024-12-16 19:22:22.560629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.478 [2024-12-16 19:22:22.563121] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:38.478 [2024-12-16 19:22:22.581815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.478 [2024-12-16 19:22:22.581864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:38.478 [2024-12-16 19:22:22.581879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.701 ms 00:20:38.478 [2024-12-16 19:22:22.581887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.478 [2024-12-16 19:22:22.582002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.478 [2024-12-16 19:22:22.582016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:38.478 [2024-12-16 19:22:22.582026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:20:38.478 [2024-12-16 19:22:22.582035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.478 [2024-12-16 19:22:22.589894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:20:38.478 [2024-12-16 19:22:22.590071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:38.478 [2024-12-16 19:22:22.590089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.814 ms 00:20:38.478 [2024-12-16 19:22:22.590096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.478 [2024-12-16 19:22:22.590225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.478 [2024-12-16 19:22:22.590237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:38.478 [2024-12-16 19:22:22.590247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:20:38.478 [2024-12-16 19:22:22.590256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.478 [2024-12-16 19:22:22.590288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.478 [2024-12-16 19:22:22.590297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:38.478 [2024-12-16 19:22:22.590306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:38.478 [2024-12-16 19:22:22.590313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.478 [2024-12-16 19:22:22.590336] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:38.478 [2024-12-16 19:22:22.594289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.478 [2024-12-16 19:22:22.594337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:38.478 [2024-12-16 19:22:22.594349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.960 ms 00:20:38.478 [2024-12-16 19:22:22.594357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.478 [2024-12-16 19:22:22.594444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.478 [2024-12-16 19:22:22.594455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:38.478 [2024-12-16 19:22:22.594465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:20:38.478 [2024-12-16 19:22:22.594494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.478 [2024-12-16 19:22:22.594521] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:38.478 [2024-12-16 19:22:22.594544] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:38.478 [2024-12-16 19:22:22.594581] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:38.478 [2024-12-16 19:22:22.594597] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:38.479 [2024-12-16 19:22:22.594702] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:38.479 [2024-12-16 19:22:22.594714] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:38.479 [2024-12-16 19:22:22.594725] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:38.479 [2024-12-16 19:22:22.594738] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:38.479 [2024-12-16 19:22:22.594747] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:38.479 [2024-12-16 19:22:22.594757] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:38.479 [2024-12-16 19:22:22.594764] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:38.479 [2024-12-16 19:22:22.594771] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:38.479 [2024-12-16 19:22:22.594779] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:38.479 [2024-12-16 19:22:22.594787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.479 [2024-12-16 19:22:22.594794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:38.479 [2024-12-16 19:22:22.594802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.269 ms 00:20:38.479 [2024-12-16 19:22:22.594809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.479 [2024-12-16 19:22:22.594898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.479 [2024-12-16 19:22:22.594910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:38.479 [2024-12-16 19:22:22.594917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:20:38.479 [2024-12-16 19:22:22.594925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.479 [2024-12-16 19:22:22.595025] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:38.479 [2024-12-16 19:22:22.595035] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:38.479 [2024-12-16 19:22:22.595044] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:38.479 [2024-12-16 19:22:22.595051] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:38.479 [2024-12-16 19:22:22.595059] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:38.479 [2024-12-16 19:22:22.595067] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:38.479 [2024-12-16 19:22:22.595074] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:38.479 [2024-12-16 19:22:22.595082] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:38.479 [2024-12-16 19:22:22.595089] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:38.479 [2024-12-16 19:22:22.595096] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:38.479 [2024-12-16 19:22:22.595103] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:38.479 [2024-12-16 19:22:22.595118] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:38.479 [2024-12-16 19:22:22.595126] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:38.479 [2024-12-16 19:22:22.595136] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:38.479 [2024-12-16 19:22:22.595142] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:38.479 [2024-12-16 19:22:22.595149] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:38.479 [2024-12-16 19:22:22.595156] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:38.479 [2024-12-16 19:22:22.595163] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:38.479 [2024-12-16 19:22:22.595197] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:38.479 [2024-12-16 19:22:22.595205] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:38.479 [2024-12-16 19:22:22.595213] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:38.479 [2024-12-16 19:22:22.595220] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:38.479 [2024-12-16 19:22:22.595227] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:38.479 [2024-12-16 19:22:22.595233] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:38.479 [2024-12-16 19:22:22.595240] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:38.479 [2024-12-16 19:22:22.595248] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:38.479 [2024-12-16 19:22:22.595255] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:38.479 [2024-12-16 19:22:22.595261] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:38.479 [2024-12-16 19:22:22.595269] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:38.479 [2024-12-16 19:22:22.595276] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:38.479 [2024-12-16 19:22:22.595284] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:38.479 [2024-12-16 19:22:22.595292] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:38.479 [2024-12-16 19:22:22.595299] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:38.479 [2024-12-16 19:22:22.595306] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:38.479 [2024-12-16 19:22:22.595316] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:38.479 [2024-12-16 19:22:22.595323] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:38.479 [2024-12-16 19:22:22.595330] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:38.480 [2024-12-16 19:22:22.595337] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:38.480 [2024-12-16 19:22:22.595344] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:38.480 [2024-12-16 19:22:22.595351] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:38.480 [2024-12-16 19:22:22.595358] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:38.480 [2024-12-16 19:22:22.595365] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:38.480 [2024-12-16 19:22:22.595372] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:38.480 [2024-12-16 19:22:22.595378] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:38.480 [2024-12-16 19:22:22.595386] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:38.480 [2024-12-16 19:22:22.595397] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:38.480 [2024-12-16 19:22:22.595405] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:38.480 [2024-12-16 19:22:22.595414] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:38.480 [2024-12-16 19:22:22.595421] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:38.480 [2024-12-16 19:22:22.595428] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:38.480 
[2024-12-16 19:22:22.595435] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:38.480 [2024-12-16 19:22:22.595441] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:38.480 [2024-12-16 19:22:22.595448] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:38.480 [2024-12-16 19:22:22.595457] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:38.480 [2024-12-16 19:22:22.595467] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:38.480 [2024-12-16 19:22:22.595476] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:38.480 [2024-12-16 19:22:22.595483] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:38.480 [2024-12-16 19:22:22.595490] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:38.480 [2024-12-16 19:22:22.595497] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:38.480 [2024-12-16 19:22:22.595505] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:38.480 [2024-12-16 19:22:22.595512] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:38.480 [2024-12-16 19:22:22.595519] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:38.480 [2024-12-16 19:22:22.595527] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:38.480 [2024-12-16 19:22:22.595534] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:38.480 [2024-12-16 19:22:22.595542] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:38.480 [2024-12-16 19:22:22.595549] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:38.480 [2024-12-16 19:22:22.595556] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:38.480 [2024-12-16 19:22:22.595564] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:38.480 [2024-12-16 19:22:22.595571] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:38.480 [2024-12-16 19:22:22.595578] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:38.480 [2024-12-16 19:22:22.595586] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:38.480 [2024-12-16 19:22:22.595594] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:20:38.480 [2024-12-16 19:22:22.595602] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:38.480 [2024-12-16 19:22:22.595610] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:38.481 [2024-12-16 19:22:22.595617] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:38.481 [2024-12-16 19:22:22.595624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.481 [2024-12-16 19:22:22.595641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:38.481 [2024-12-16 19:22:22.595650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.669 ms 00:20:38.481 [2024-12-16 19:22:22.595657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.481 [2024-12-16 19:22:22.626871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.481 [2024-12-16 19:22:22.626919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:38.481 [2024-12-16 19:22:22.626930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.160 ms 00:20:38.481 [2024-12-16 19:22:22.626938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.481 [2024-12-16 19:22:22.627071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.481 [2024-12-16 19:22:22.627082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:38.481 [2024-12-16 19:22:22.627091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:20:38.481 [2024-12-16 19:22:22.627099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.481 [2024-12-16 19:22:22.671722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.481 [2024-12-16 19:22:22.671772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:38.481 [2024-12-16 19:22:22.671790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.601 ms 00:20:38.481 [2024-12-16 19:22:22.671800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.481 [2024-12-16 19:22:22.671909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.481 [2024-12-16 19:22:22.671922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:38.481 [2024-12-16 19:22:22.671931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:38.481 [2024-12-16 19:22:22.671940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.481 [2024-12-16 19:22:22.672473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.481 [2024-12-16 19:22:22.672504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:38.481 [2024-12-16 19:22:22.672515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.509 ms 00:20:38.481 [2024-12-16 19:22:22.672529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.481 [2024-12-16 19:22:22.672684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.481 [2024-12-16 19:22:22.672694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:38.481 [2024-12-16 19:22:22.672703] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.126 ms 00:20:38.481 [2024-12-16 19:22:22.672710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.481 [2024-12-16 19:22:22.688695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.481 [2024-12-16 19:22:22.688737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:38.481 [2024-12-16 19:22:22.688748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.963 ms 00:20:38.481 [2024-12-16 19:22:22.688757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.481 [2024-12-16 19:22:22.702807] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:20:38.481 [2024-12-16 19:22:22.703004] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:38.481 [2024-12-16 19:22:22.703025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.481 [2024-12-16 19:22:22.703034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:38.481 [2024-12-16 19:22:22.703044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.153 ms 00:20:38.481 [2024-12-16 19:22:22.703051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.481 [2024-12-16 19:22:22.728996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.481 [2024-12-16 19:22:22.729044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:38.481 [2024-12-16 19:22:22.729056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.760 ms 00:20:38.481 [2024-12-16 19:22:22.729065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.481 [2024-12-16 19:22:22.742018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.481 [2024-12-16 19:22:22.742062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:38.481 [2024-12-16 19:22:22.742073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.863 ms 00:20:38.481 [2024-12-16 19:22:22.742081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.481 [2024-12-16 19:22:22.754620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.481 [2024-12-16 19:22:22.754661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:38.481 [2024-12-16 19:22:22.754673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.456 ms 00:20:38.481 [2024-12-16 19:22:22.754680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.481 [2024-12-16 19:22:22.755347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.481 [2024-12-16 19:22:22.755369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:38.481 [2024-12-16 19:22:22.755379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.557 ms 00:20:38.481 [2024-12-16 19:22:22.755387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.481 [2024-12-16 19:22:22.820794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.481 [2024-12-16 19:22:22.820994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:38.481 [2024-12-16 19:22:22.821019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 65.377 ms 00:20:38.481 [2024-12-16 19:22:22.821029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.743 [2024-12-16 19:22:22.832997] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:38.743 [2024-12-16 19:22:22.851840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.743 [2024-12-16 19:22:22.852047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:38.743 [2024-12-16 19:22:22.852077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.711 ms 00:20:38.743 [2024-12-16 19:22:22.852086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.743 [2024-12-16 19:22:22.852218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.743 [2024-12-16 19:22:22.852232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:38.743 [2024-12-16 19:22:22.852243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:20:38.743 [2024-12-16 19:22:22.852251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.743 [2024-12-16 19:22:22.852308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.743 [2024-12-16 19:22:22.852318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:38.743 [2024-12-16 19:22:22.852331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:20:38.743 [2024-12-16 19:22:22.852341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.743 [2024-12-16 19:22:22.852368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.743 [2024-12-16 19:22:22.852378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:38.743 [2024-12-16 19:22:22.852387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:38.743 [2024-12-16 19:22:22.852395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.743 [2024-12-16 19:22:22.852434] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:38.743 [2024-12-16 19:22:22.852445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.743 [2024-12-16 19:22:22.852453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:38.743 [2024-12-16 19:22:22.852461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:20:38.743 [2024-12-16 19:22:22.852469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.743 [2024-12-16 19:22:22.878237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.743 [2024-12-16 19:22:22.878416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:38.743 [2024-12-16 19:22:22.878438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.743 ms 00:20:38.743 [2024-12-16 19:22:22.878448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.743 [2024-12-16 19:22:22.878579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.743 [2024-12-16 19:22:22.878591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:38.743 [2024-12-16 19:22:22.878603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:20:38.743 [2024-12-16 19:22:22.878615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:20:38.743 [2024-12-16 19:22:22.879685] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:20:38.743 [2024-12-16 19:22:22.883063] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 323.495 ms, result 0
00:20:38.743 [2024-12-16 19:22:22.884390] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:20:38.743 [2024-12-16 19:22:22.897769] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:20:39.004  [2024-12-16T19:22:23.358Z] Copying: 4096/4096 [kB] (average 12 MBps)
[2024-12-16 19:22:23.231461] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:20:39.004 [2024-12-16 19:22:23.240939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:39.004 [2024-12-16 19:22:23.240990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:20:39.004 [2024-12-16 19:22:23.241003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:20:39.004 [2024-12-16 19:22:23.241012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:39.004 [2024-12-16 19:22:23.241036] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread
00:20:39.004 [2024-12-16 19:22:23.243972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:39.004 [2024-12-16 19:22:23.244007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:20:39.004 [2024-12-16 19:22:23.244018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.920 ms
00:20:39.004 [2024-12-16 19:22:23.244027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:39.004 [2024-12-16 19:22:23.246870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:39.004 [2024-12-16 19:22:23.247031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:20:39.004 [2024-12-16 19:22:23.247061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.816 ms
00:20:39.004 [2024-12-16 19:22:23.247073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:39.004 [2024-12-16 19:22:23.251484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:39.004 [2024-12-16 19:22:23.251520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:20:39.004 [2024-12-16 19:22:23.251529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.389 ms
00:20:39.004 [2024-12-16 19:22:23.251537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:39.004 [2024-12-16 19:22:23.258426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:39.004 [2024-12-16 19:22:23.258462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims
00:20:39.004 [2024-12-16 19:22:23.258473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.857 ms
00:20:39.004 [2024-12-16 19:22:23.258480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:39.004 [2024-12-16 19:22:23.283198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:39.004 [2024-12-16 19:22:23.283241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:20:39.004 [2024-12-16 19:22:23.283253] mngt/ftl_mngt.c: 430:trace_step:
*NOTICE*: [FTL][ftl0] duration: 24.673 ms 00:20:39.004 [2024-12-16 19:22:23.283260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.004 [2024-12-16 19:22:23.299546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.004 [2024-12-16 19:22:23.299593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:39.004 [2024-12-16 19:22:23.299605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.215 ms 00:20:39.004 [2024-12-16 19:22:23.299613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.004 [2024-12-16 19:22:23.299764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.004 [2024-12-16 19:22:23.299776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:39.004 [2024-12-16 19:22:23.299794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:20:39.004 [2024-12-16 19:22:23.299801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.004 [2024-12-16 19:22:23.325234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.004 [2024-12-16 19:22:23.325403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:39.004 [2024-12-16 19:22:23.325422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.416 ms 00:20:39.004 [2024-12-16 19:22:23.325429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.004 [2024-12-16 19:22:23.350520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.004 [2024-12-16 19:22:23.350558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:39.004 [2024-12-16 19:22:23.350570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.960 ms 00:20:39.004 [2024-12-16 19:22:23.350579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.267 [2024-12-16 19:22:23.379872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.267 [2024-12-16 19:22:23.379926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:39.267 [2024-12-16 19:22:23.379942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.244 ms 00:20:39.267 [2024-12-16 19:22:23.379952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.267 [2024-12-16 19:22:23.404831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.267 [2024-12-16 19:22:23.404873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:39.267 [2024-12-16 19:22:23.404884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.764 ms 00:20:39.267 [2024-12-16 19:22:23.404892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.267 [2024-12-16 19:22:23.404938] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:39.267 [2024-12-16 19:22:23.404954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:39.267 [2024-12-16 19:22:23.404966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:39.267 [2024-12-16 19:22:23.404974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:39.267 [2024-12-16 19:22:23.404981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 
00:20:39.267 [2024-12-16 19:22:23.404990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:39.267 [2024-12-16 19:22:23.404998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:39.267 [2024-12-16 19:22:23.405006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:39.267 [2024-12-16 19:22:23.405014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:39.267 [2024-12-16 19:22:23.405022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:39.267 [2024-12-16 19:22:23.405029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:39.267 [2024-12-16 19:22:23.405037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:39.267 [2024-12-16 19:22:23.405044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:39.267 [2024-12-16 19:22:23.405051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:39.267 [2024-12-16 19:22:23.405058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:39.267 [2024-12-16 19:22:23.405066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:39.267 [2024-12-16 19:22:23.405073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:39.267 [2024-12-16 19:22:23.405081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:39.267 [2024-12-16 19:22:23.405088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:39.267 [2024-12-16 19:22:23.405095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:39.267 [2024-12-16 19:22:23.405102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:39.267 [2024-12-16 19:22:23.405108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:39.267 [2024-12-16 19:22:23.405116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:39.267 [2024-12-16 19:22:23.405122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:39.267 [2024-12-16 19:22:23.405129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:39.267 [2024-12-16 19:22:23.405136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:39.267 [2024-12-16 19:22:23.405143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:39.267 [2024-12-16 19:22:23.405151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:39.267 [2024-12-16 19:22:23.405158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:39.267 [2024-12-16 19:22:23.405167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 
wr_cnt: 0 state: free 00:20:39.267 [2024-12-16 19:22:23.405188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:39.267 [2024-12-16 19:22:23.405195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:39.267 [2024-12-16 19:22:23.405203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:39.267 [2024-12-16 19:22:23.405210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:39.267 [2024-12-16 19:22:23.405218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:39.267 [2024-12-16 19:22:23.405226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:39.267 [2024-12-16 19:22:23.405233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:39.267 [2024-12-16 19:22:23.405240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:39.267 [2024-12-16 19:22:23.405248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:39.267 [2024-12-16 19:22:23.405256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:39.267 [2024-12-16 19:22:23.405263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:39.267 [2024-12-16 19:22:23.405271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:39.267 [2024-12-16 19:22:23.405278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:39.267 [2024-12-16 19:22:23.405285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:39.267 [2024-12-16 19:22:23.405293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:39.267 [2024-12-16 19:22:23.405301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:39.267 [2024-12-16 19:22:23.405308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:39.267 [2024-12-16 19:22:23.405315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:39.267 [2024-12-16 19:22:23.405323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:39.267 [2024-12-16 19:22:23.405330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:39.267 [2024-12-16 19:22:23.405338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:39.267 [2024-12-16 19:22:23.405346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:39.267 [2024-12-16 19:22:23.405353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:39.267 [2024-12-16 19:22:23.405361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:39.267 [2024-12-16 19:22:23.405368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:39.268 [2024-12-16 19:22:23.405376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:39.268 [2024-12-16 19:22:23.405383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:39.268 [2024-12-16 19:22:23.405390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:39.268 [2024-12-16 19:22:23.405397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:39.268 [2024-12-16 19:22:23.405406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:39.268 [2024-12-16 19:22:23.405414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:39.268 [2024-12-16 19:22:23.405423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:39.268 [2024-12-16 19:22:23.405431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:39.268 [2024-12-16 19:22:23.405438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:39.268 [2024-12-16 19:22:23.405446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:39.268 [2024-12-16 19:22:23.405454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:39.268 [2024-12-16 19:22:23.405463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:39.268 [2024-12-16 19:22:23.405470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:39.268 [2024-12-16 19:22:23.405478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:39.268 [2024-12-16 19:22:23.405486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:39.268 [2024-12-16 19:22:23.405493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:39.268 [2024-12-16 19:22:23.405501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:39.268 [2024-12-16 19:22:23.405508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:39.268 [2024-12-16 19:22:23.405516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:39.268 [2024-12-16 19:22:23.405523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:39.268 [2024-12-16 19:22:23.405531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:39.268 [2024-12-16 19:22:23.405538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:39.268 [2024-12-16 19:22:23.405545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:39.268 [2024-12-16 19:22:23.405553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:39.268 [2024-12-16 19:22:23.405561] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:39.268 [2024-12-16 19:22:23.405569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:39.268 [2024-12-16 19:22:23.405578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:39.268 [2024-12-16 19:22:23.405585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:39.268 [2024-12-16 19:22:23.405593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:39.268 [2024-12-16 19:22:23.405600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:39.268 [2024-12-16 19:22:23.405608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:39.268 [2024-12-16 19:22:23.405616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:39.268 [2024-12-16 19:22:23.405623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:39.268 [2024-12-16 19:22:23.405631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:39.268 [2024-12-16 19:22:23.405638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:39.268 [2024-12-16 19:22:23.405646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:39.268 [2024-12-16 19:22:23.405654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:39.268 [2024-12-16 19:22:23.405662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:39.268 [2024-12-16 19:22:23.405671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:39.268 [2024-12-16 19:22:23.405689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:39.268 [2024-12-16 19:22:23.405697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:39.268 [2024-12-16 19:22:23.405705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:39.268 [2024-12-16 19:22:23.405713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:39.268 [2024-12-16 19:22:23.405720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:39.268 [2024-12-16 19:22:23.405728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:39.268 [2024-12-16 19:22:23.405736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:39.268 [2024-12-16 19:22:23.405752] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:39.268 [2024-12-16 19:22:23.405760] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 36f840fd-a489-4625-9f63-997690818b8f 00:20:39.268 [2024-12-16 19:22:23.405768] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:39.268 [2024-12-16 19:22:23.405776] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total 
writes: 960 00:20:39.268 [2024-12-16 19:22:23.405784] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:39.268 [2024-12-16 19:22:23.405792] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:39.268 [2024-12-16 19:22:23.405799] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:39.268 [2024-12-16 19:22:23.405810] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:39.268 [2024-12-16 19:22:23.405817] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:39.268 [2024-12-16 19:22:23.405824] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:39.268 [2024-12-16 19:22:23.405830] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:39.268 [2024-12-16 19:22:23.405837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.268 [2024-12-16 19:22:23.405846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:39.268 [2024-12-16 19:22:23.405855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.901 ms 00:20:39.268 [2024-12-16 19:22:23.405863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.268 [2024-12-16 19:22:23.419377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.268 [2024-12-16 19:22:23.419413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:39.268 [2024-12-16 19:22:23.419424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.482 ms 00:20:39.268 [2024-12-16 19:22:23.419438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.268 [2024-12-16 19:22:23.419832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.268 [2024-12-16 19:22:23.419842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:39.268 [2024-12-16 19:22:23.419851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.355 ms 00:20:39.268 [2024-12-16 19:22:23.419859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.268 [2024-12-16 19:22:23.458619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:39.268 [2024-12-16 19:22:23.458662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:39.268 [2024-12-16 19:22:23.458678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:39.268 [2024-12-16 19:22:23.458686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.268 [2024-12-16 19:22:23.458763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:39.268 [2024-12-16 19:22:23.458773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:39.268 [2024-12-16 19:22:23.458781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:39.268 [2024-12-16 19:22:23.458788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.268 [2024-12-16 19:22:23.458837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:39.268 [2024-12-16 19:22:23.458847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:39.268 [2024-12-16 19:22:23.458855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:39.268 [2024-12-16 19:22:23.458867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.268 [2024-12-16 19:22:23.458885] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:39.268 [2024-12-16 19:22:23.458893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:39.268 [2024-12-16 19:22:23.458902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:39.268 [2024-12-16 19:22:23.458909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.268 [2024-12-16 19:22:23.543853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:39.268 [2024-12-16 19:22:23.543898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:39.268 [2024-12-16 19:22:23.543909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:39.268 [2024-12-16 19:22:23.543923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.268 [2024-12-16 19:22:23.612489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:39.268 [2024-12-16 19:22:23.612537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:39.268 [2024-12-16 19:22:23.612549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:39.268 [2024-12-16 19:22:23.612557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.268 [2024-12-16 19:22:23.612612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:39.268 [2024-12-16 19:22:23.612622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:39.268 [2024-12-16 19:22:23.612631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:39.268 [2024-12-16 19:22:23.612640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.269 [2024-12-16 19:22:23.612680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:39.269 [2024-12-16 19:22:23.612689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:39.269 [2024-12-16 19:22:23.612697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:39.269 [2024-12-16 19:22:23.612707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.269 [2024-12-16 19:22:23.612807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:39.269 [2024-12-16 19:22:23.612817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:39.269 [2024-12-16 19:22:23.612827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:39.269 [2024-12-16 19:22:23.612834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.269 [2024-12-16 19:22:23.612868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:39.269 [2024-12-16 19:22:23.612881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:39.269 [2024-12-16 19:22:23.612890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:39.269 [2024-12-16 19:22:23.612898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.269 [2024-12-16 19:22:23.612944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:39.269 [2024-12-16 19:22:23.612954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:39.269 [2024-12-16 19:22:23.612962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:39.269 [2024-12-16 19:22:23.612970] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0
00:20:39.269 [2024-12-16 19:22:23.613022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:39.269 [2024-12-16 19:22:23.613034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:20:39.269 [2024-12-16 19:22:23.613042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:39.269 [2024-12-16 19:22:23.613050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:39.269 [2024-12-16 19:22:23.613240] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 372.260 ms, result 0
00:20:40.212 
00:20:40.212 
00:20:40.212 19:22:24 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=78753
00:20:40.212 19:22:24 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 78753
00:20:40.212 19:22:24 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 78753 ']'
00:20:40.212 19:22:24 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:20:40.212 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:20:40.212 19:22:24 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init
00:20:40.212 19:22:24 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100
00:20:40.212 19:22:24 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:20:40.212 19:22:24 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable
00:20:40.212 19:22:24 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x
00:20:40.212 [2024-12-16 19:22:24.481143] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization...
00:20:40.212 [2024-12-16 19:22:24.481300] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78753 ]
00:20:40.472 [2024-12-16 19:22:24.645067] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:40.472 [2024-12-16 19:22:24.761804] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:20:41.415 19:22:25 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:20:41.415 19:22:25 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0
00:20:41.415 19:22:25 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config
00:20:41.415 [2024-12-16 19:22:25.756231] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:20:41.415 [2024-12-16 19:22:25.756306] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:20:41.677 [2024-12-16 19:22:25.935168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:41.677 [2024-12-16 19:22:25.935237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration
00:20:41.677 [2024-12-16 19:22:25.935255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms
00:20:41.677 [2024-12-16 19:22:25.935264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:41.677 [2024-12-16 19:22:25.938329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:41.677 [2024-12-16 19:22:25.938372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:20:41.677 [2024-12-16 19:22:25.938396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.043 ms
00:20:41.677 [2024-12-16 19:22:25.938405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:41.677 [2024-12-16 19:22:25.938528] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:20:41.677 [2024-12-16 19:22:25.939237] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:20:41.677 [2024-12-16 19:22:25.939265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:41.677 [2024-12-16 19:22:25.939274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:20:41.677 [2024-12-16 19:22:25.939286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.750 ms
00:20:41.677 [2024-12-16 19:22:25.939294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:41.677 [2024-12-16 19:22:25.941115] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:20:41.677 [2024-12-16 19:22:25.955543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:41.677 [2024-12-16 19:22:25.955591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block
00:20:41.677 [2024-12-16 19:22:25.955605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.433 ms
00:20:41.677 [2024-12-16 19:22:25.955616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:41.677 [2024-12-16 19:22:25.955734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:41.677 [2024-12-16 19:22:25.955748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block
00:20:41.677 [2024-12-16 19:22:25.955760]
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms
00:20:41.677 [2024-12-16 19:22:25.955770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:41.677 [2024-12-16 19:22:25.963763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:41.677 [2024-12-16 19:22:25.963808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:20:41.677 [2024-12-16 19:22:25.963818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.938 ms
00:20:41.677 [2024-12-16 19:22:25.963828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:41.677 [2024-12-16 19:22:25.963957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:41.677 [2024-12-16 19:22:25.963969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:20:41.677 [2024-12-16 19:22:25.963979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms
00:20:41.677 [2024-12-16 19:22:25.963992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:41.677 [2024-12-16 19:22:25.964022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:41.677 [2024-12-16 19:22:25.964034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device
00:20:41.677 [2024-12-16 19:22:25.964042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms
00:20:41.677 [2024-12-16 19:22:25.964052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:41.677 [2024-12-16 19:22:25.964078] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread
00:20:41.677 [2024-12-16 19:22:25.968227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:41.677 [2024-12-16 19:22:25.968262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:20:41.677 [2024-12-16 19:22:25.968276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.153 ms
00:20:41.677 [2024-12-16 19:22:25.968284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:41.677 [2024-12-16 19:22:25.968368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:41.677 [2024-12-16 19:22:25.968379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands
00:20:41.677 [2024-12-16 19:22:25.968391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms
00:20:41.677 [2024-12-16 19:22:25.968403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:41.677 [2024-12-16 19:22:25.968426] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
00:20:41.677 [2024-12-16 19:22:25.968449] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes
00:20:41.677 [2024-12-16 19:22:25.968498] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes
00:20:41.677 [2024-12-16 19:22:25.968516] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes
00:20:41.677 [2024-12-16 19:22:25.968624] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes
00:20:41.677 [2024-12-16 19:22:25.968635] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
00:20:41.677 [2024-12-16 19:22:25.968651] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes
00:20:41.677 [2024-12-16 19:22:25.968662] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB
00:20:41.677 [2024-12-16 19:22:25.968672] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB
00:20:41.677 [2024-12-16 19:22:25.968681] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960
00:20:41.677 [2024-12-16 19:22:25.968690] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4
00:20:41.677 [2024-12-16 19:22:25.968698] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048
00:20:41.677 [2024-12-16 19:22:25.968709] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5
00:20:41.677 [2024-12-16 19:22:25.968717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:41.677 [2024-12-16 19:22:25.968726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout
00:20:41.677 [2024-12-16 19:22:25.968734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.295 ms
00:20:41.677 [2024-12-16 19:22:25.968743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:41.677 [2024-12-16 19:22:25.968832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:41.677 [2024-12-16 19:22:25.968842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout
00:20:41.677 [2024-12-16 19:22:25.968849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms
00:20:41.677 [2024-12-16 19:22:25.968858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:41.677 [2024-12-16 19:22:25.968959] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
00:20:41.677 [2024-12-16 19:22:25.968981] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb
00:20:41.677 [2024-12-16 19:22:25.968990] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB
00:20:41.677 [2024-12-16 19:22:25.969001] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:20:41.677 [2024-12-16 19:22:25.969009] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p
00:20:41.677 [2024-12-16 19:22:25.969021] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB
00:20:41.677 [2024-12-16 19:22:25.969028] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB
00:20:41.677 [2024-12-16 19:22:25.969040] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md
00:20:41.677 [2024-12-16 19:22:25.969047] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB
00:20:41.677 [2024-12-16 19:22:25.969056] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB
00:20:41.677 [2024-12-16 19:22:25.969063] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror
00:20:41.677 [2024-12-16 19:22:25.969071] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB
00:20:41.677 [2024-12-16 19:22:25.969078] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB
00:20:41.677 [2024-12-16 19:22:25.969087] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md
00:20:41.677 [2024-12-16 19:22:25.969094] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB
00:20:41.677 [2024-12-16 19:22:25.969102] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:20:41.677 [2024-12-16 19:22:25.969109] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror
00:20:41.678 [2024-12-16 19:22:25.969118] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB
00:20:41.678 [2024-12-16 19:22:25.969131] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:20:41.678 [2024-12-16 19:22:25.969140] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0
00:20:41.678 [2024-12-16 19:22:25.969147] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB
00:20:41.678 [2024-12-16 19:22:25.969155] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:20:41.678 [2024-12-16 19:22:25.969161] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1
00:20:41.678 [2024-12-16 19:22:25.969187] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB
00:20:41.678 [2024-12-16 19:22:25.969195] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:20:41.678 [2024-12-16 19:22:25.969203] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2
00:20:41.678 [2024-12-16 19:22:25.969210] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB
00:20:41.678 [2024-12-16 19:22:25.969218] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:20:41.678 [2024-12-16 19:22:25.969225] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3
00:20:41.678 [2024-12-16 19:22:25.969234] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB
00:20:41.678 [2024-12-16 19:22:25.969241] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:20:41.678 [2024-12-16 19:22:25.969250] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md
00:20:41.678 [2024-12-16 19:22:25.969256] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB
00:20:41.678 [2024-12-16 19:22:25.969265] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB
00:20:41.678 [2024-12-16 19:22:25.969272] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror
00:20:41.678 [2024-12-16 19:22:25.969280] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB
00:20:41.678 [2024-12-16 19:22:25.969286] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB
00:20:41.678 [2024-12-16 19:22:25.969299] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log
00:20:41.678 [2024-12-16 19:22:25.969307] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB
00:20:41.678 [2024-12-16 19:22:25.969319] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:20:41.678 [2024-12-16 19:22:25.969326] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror
00:20:41.678 [2024-12-16 19:22:25.969334] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB
00:20:41.678 [2024-12-16 19:22:25.969342] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:20:41.678 [2024-12-16 19:22:25.969350] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
00:20:41.678 [2024-12-16 19:22:25.969360] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror
00:20:41.678 [2024-12-16 19:22:25.969369] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB
00:20:41.678 [2024-12-16 19:22:25.969377] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:20:41.678 [2024-12-16 19:22:25.969386] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap
00:20:41.678 [2024-12-16 19:22:25.969393] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB
00:20:41.678 [2024-12-16 19:22:25.969402] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB
00:20:41.678 [2024-12-16 19:22:25.969409] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm
00:20:41.678 [2024-12-16 19:22:25.969417] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB
00:20:41.678 [2024-12-16 19:22:25.969424] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB
00:20:41.678 [2024-12-16 19:22:25.969435] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
00:20:41.678 [2024-12-16 19:22:25.969445] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:20:41.678 [2024-12-16 19:22:25.969459] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00
00:20:41.678 [2024-12-16 19:22:25.969467] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80
00:20:41.678 [2024-12-16 19:22:25.969476] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80
00:20:41.678 [2024-12-16 19:22:25.969483] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800
00:20:41.678 [2024-12-16 19:22:25.969491] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800
00:20:41.678 [2024-12-16 19:22:25.969499] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800
00:20:41.678 [2024-12-16 19:22:25.969508] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800
00:20:41.678 [2024-12-16 19:22:25.969514] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40
00:20:41.678 [2024-12-16 19:22:25.969523] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40
00:20:41.678 [2024-12-16 19:22:25.969530] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20
00:20:41.678 [2024-12-16 19:22:25.969540] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20
00:20:41.678 [2024-12-16 19:22:25.969547] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20
00:20:41.678 [2024-12-16 19:22:25.969557] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20
00:20:41.678 [2024-12-16 19:22:25.969564] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0
00:20:41.678 [2024-12-16 19:22:25.969575] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
00:20:41.678 [2024-12-16 19:22:25.969584] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:20:41.678 [2024-12-16 19:22:25.969597] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:20:41.678 [2024-12-16 19:22:25.969604] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
00:20:41.678 [2024-12-16 19:22:25.969613] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
00:20:41.678 [2024-12-16 19:22:25.969620] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
00:20:41.678 [2024-12-16 19:22:25.969630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:41.678 [2024-12-16 19:22:25.969638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade
00:20:41.678 [2024-12-16 19:22:25.969648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.738 ms
00:20:41.678 [2024-12-16 19:22:25.969658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:41.678 [2024-12-16 19:22:26.001247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:41.678 [2024-12-16 19:22:26.001292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:20:41.678 [2024-12-16 19:22:26.001307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.525 ms
00:20:41.678 [2024-12-16 19:22:26.001318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:41.678 [2024-12-16 19:22:26.001464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:41.678 [2024-12-16 19:22:26.001476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses
00:20:41.678 [2024-12-16 19:22:26.001487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms
00:20:41.678 [2024-12-16 19:22:26.001495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:41.940 [2024-12-16 19:22:26.036112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:41.940 [2024-12-16 19:22:26.036160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:20:41.940 [2024-12-16 19:22:26.036182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.589 ms
00:20:41.940 [2024-12-16 19:22:26.036191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:41.940 [2024-12-16 19:22:26.036295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:41.940 [2024-12-16 19:22:26.036307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:20:41.940 [2024-12-16 19:22:26.036318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms
00:20:41.940 [2024-12-16 19:22:26.036326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:41.940 [2024-12-16 19:22:26.036876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:41.940 [2024-12-16 19:22:26.036908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:20:41.940 [2024-12-16 19:22:26.036924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.523 ms
00:20:41.940 [2024-12-16 19:22:26.036933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:41.940 [2024-12-16 19:22:26.037092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:41.940 [2024-12-16 19:22:26.037102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:20:41.940 [2024-12-16 19:22:26.037113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.127 ms
00:20:41.940 [2024-12-16 19:22:26.037121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:41.940 [2024-12-16 19:22:26.054795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:41.940 [2024-12-16 19:22:26.054833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:20:41.940 [2024-12-16 19:22:26.054847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.646 ms
00:20:41.940 [2024-12-16 19:22:26.054856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:41.940 [2024-12-16 19:22:26.084189] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2
00:20:41.940 [2024-12-16 19:22:26.084241] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
00:20:41.940 [2024-12-16 19:22:26.084260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:41.940 [2024-12-16 19:22:26.084270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata
00:20:41.940 [2024-12-16 19:22:26.084283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.276 ms
00:20:41.940 [2024-12-16 19:22:26.084298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:41.940 [2024-12-16 19:22:26.110272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:41.940 [2024-12-16 19:22:26.110332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata
00:20:41.940 [2024-12-16 19:22:26.110351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.861 ms
00:20:41.940 [2024-12-16 19:22:26.110360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:41.940 [2024-12-16 19:22:26.123552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:41.940 [2024-12-16 19:22:26.123600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata
00:20:41.940 [2024-12-16 19:22:26.123619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.063 ms
00:20:41.940 [2024-12-16 19:22:26.123626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:41.940 [2024-12-16 19:22:26.136184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:41.940 [2024-12-16 19:22:26.136223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata
00:20:41.940 [2024-12-16 19:22:26.136238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.450 ms
00:20:41.940 [2024-12-16 19:22:26.136246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:41.940 [2024-12-16 19:22:26.136969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:41.940 [2024-12-16 19:22:26.136998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing
00:20:41.940 [2024-12-16 19:22:26.137011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.579 ms
00:20:41.940 [2024-12-16 19:22:26.137019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:41.940 [2024-12-16 19:22:26.203408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:41.940 [2024-12-16 19:22:26.203481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints
00:20:41.940 [2024-12-16 19:22:26.203501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 66.356 ms
00:20:41.940 [2024-12-16 19:22:26.203510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:41.940 [2024-12-16 19:22:26.215631] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB
00:20:41.940 [2024-12-16 19:22:26.235987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:41.940 [2024-12-16 19:22:26.236044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P
00:20:41.940 [2024-12-16 19:22:26.236060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.335 ms
00:20:41.940 [2024-12-16 19:22:26.236070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:41.940 [2024-12-16 19:22:26.236207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:41.940 [2024-12-16 19:22:26.236222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P
00:20:41.940 [2024-12-16 19:22:26.236231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms
00:20:41.940 [2024-12-16 19:22:26.236242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:41.940 [2024-12-16 19:22:26.236302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:41.940 [2024-12-16 19:22:26.236314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization
00:20:41.940 [2024-12-16 19:22:26.236323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms
00:20:41.940 [2024-12-16 19:22:26.236336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:41.940 [2024-12-16 19:22:26.236385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:41.940 [2024-12-16 19:22:26.236399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller
00:20:41.940 [2024-12-16 19:22:26.236408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms
00:20:41.940 [2024-12-16 19:22:26.236418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:41.940 [2024-12-16 19:22:26.236454] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:20:41.940 [2024-12-16 19:22:26.236469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:41.940 [2024-12-16 19:22:26.236480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup
00:20:41.941 [2024-12-16 19:22:26.236491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms
00:20:41.941 [2024-12-16 19:22:26.236498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:41.941 [2024-12-16 19:22:26.262561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:41.941 [2024-12-16 19:22:26.262606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
00:20:41.941 [2024-12-16 19:22:26.262622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.030 ms
00:20:41.941 [2024-12-16 19:22:26.262630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:41.941 [2024-12-16 19:22:26.262743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:41.941 [2024-12-16 19:22:26.262755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:20:41.941 [2024-12-16 19:22:26.262766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms
00:20:41.941 [2024-12-16 19:22:26.262777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:41.941 [2024-12-16 19:22:26.263844] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:20:41.941 [2024-12-16 19:22:26.267201] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 328.354 ms, result 0
00:20:41.941 [2024-12-16 19:22:26.269090] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:20:41.941 Some configs were skipped because the RPC state that can call them passed over.
00:20:42.202 19:22:26 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
00:20:42.202 [2024-12-16 19:22:26.509918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:42.202 [2024-12-16 19:22:26.509989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim
00:20:42.202 [2024-12-16 19:22:26.510005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.204 ms
00:20:42.202 [2024-12-16 19:22:26.510016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:42.202 [2024-12-16 19:22:26.510055] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.346 ms, result 0
00:20:42.202 true
00:20:42.202 19:22:26 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024
00:20:42.463 [2024-12-16 19:22:26.805775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:42.463 [2024-12-16 19:22:26.805838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim
00:20:42.463 [2024-12-16 19:22:26.805855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.521 ms
00:20:42.463 [2024-12-16 19:22:26.805864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:42.463 [2024-12-16 19:22:26.805906] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.662 ms, result 0
00:20:42.463 true
00:20:42.724 19:22:26 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 78753
00:20:42.724 19:22:26 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 78753 ']'
00:20:42.724 19:22:26 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 78753
00:20:42.724 19:22:26 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname
00:20:42.724 19:22:26 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:20:42.724 19:22:26 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78753
00:20:42.724 killing process with pid 78753
19:22:26 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:20:42.724 19:22:26 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:20:42.724 19:22:26 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78753'
00:20:42.724 19:22:26 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 78753
00:20:42.724 19:22:26 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 78753
00:20:43.298 [2024-12-16 19:22:27.554447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:43.298 [2024-12-16 19:22:27.554491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:20:43.298 [2024-12-16 19:22:27.554502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms
00:20:43.298 [2024-12-16 19:22:27.554509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:43.298 [2024-12-16 19:22:27.554528] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread
00:20:43.298 [2024-12-16 19:22:27.556646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:43.298 [2024-12-16 19:22:27.556670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:20:43.298 [2024-12-16 19:22:27.556681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.104 ms
00:20:43.298 [2024-12-16 19:22:27.556687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:43.298 [2024-12-16 19:22:27.556891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:43.298 [2024-12-16 19:22:27.556899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:20:43.298 [2024-12-16 19:22:27.556906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.185 ms
00:20:43.298 [2024-12-16 19:22:27.556912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:43.298 [2024-12-16 19:22:27.560201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:43.298 [2024-12-16 19:22:27.560227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:20:43.298 [2024-12-16 19:22:27.560237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.274 ms
00:20:43.298 [2024-12-16 19:22:27.560243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:43.299 [2024-12-16 19:22:27.565433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:43.299 [2024-12-16 19:22:27.565460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims
00:20:43.299 [2024-12-16 19:22:27.565469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.162 ms
00:20:43.299 [2024-12-16 19:22:27.565476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:43.299 [2024-12-16 19:22:27.572791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:43.299 [2024-12-16 19:22:27.572823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:20:43.299 [2024-12-16 19:22:27.572834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.273 ms
00:20:43.299 [2024-12-16 19:22:27.572839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:43.299 [2024-12-16 19:22:27.579001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:43.299 [2024-12-16 19:22:27.579029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata
00:20:43.299 [2024-12-16 19:22:27.579038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.131 ms
00:20:43.299 [2024-12-16 19:22:27.579045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:43.299 [2024-12-16 19:22:27.579148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:43.299 [2024-12-16 19:22:27.579156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata
00:20:43.299 [2024-12-16 19:22:27.579164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms
00:20:43.299 [2024-12-16 19:22:27.579170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:43.299 [2024-12-16 19:22:27.586881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:43.299 [2024-12-16 19:22:27.586916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata
00:20:43.299 [2024-12-16 19:22:27.586925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.680 ms
00:20:43.299 [2024-12-16 19:22:27.586930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:43.299 [2024-12-16 19:22:27.593931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:43.299 [2024-12-16 19:22:27.593957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata
00:20:43.299 [2024-12-16 19:22:27.593968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.972 ms
00:20:43.299 [2024-12-16 19:22:27.593973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:43.299 [2024-12-16 19:22:27.600863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:43.299 [2024-12-16 19:22:27.600889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:20:43.299 [2024-12-16 19:22:27.600897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.854 ms
00:20:43.299 [2024-12-16 19:22:27.600902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:43.299 [2024-12-16 19:22:27.607595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:43.299 [2024-12-16 19:22:27.607620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:20:43.299 [2024-12-16 19:22:27.607629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.646 ms
00:20:43.299 [2024-12-16 19:22:27.607634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:43.299 [2024-12-16 19:22:27.607660] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:20:43.299 [2024-12-16 19:22:27.607672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free
00:20:43.299 [2024-12-16 19:22:27.607680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free
00:20:43.299 [2024-12-16 19:22:27.607686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free
00:20:43.299 [2024-12-16 19:22:27.607693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free
00:20:43.299 [2024-12-16 19:22:27.607699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free
00:20:43.299 [2024-12-16 19:22:27.607707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free
00:20:43.299 [2024-12-16 19:22:27.607713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free
00:20:43.299 [2024-12-16 19:22:27.607721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free
00:20:43.299 [2024-12-16 19:22:27.607726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free
00:20:43.299 [2024-12-16 19:22:27.607733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free
00:20:43.299 [2024-12-16 19:22:27.607739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free
00:20:43.299 [2024-12-16 19:22:27.607746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free
00:20:43.299 [2024-12-16 19:22:27.607751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free
00:20:43.299 [2024-12-16 19:22:27.607758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free
00:20:43.299 [2024-12-16 19:22:27.607764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free
00:20:43.299 [2024-12-16 19:22:27.607771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free
00:20:43.299 [2024-12-16 19:22:27.607776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free
00:20:43.299 [2024-12-16 19:22:27.607784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free
00:20:43.299 [2024-12-16 19:22:27.607790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free
00:20:43.299 [2024-12-16 19:22:27.607797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free
00:20:43.299 [2024-12-16 19:22:27.607803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free
00:20:43.299 [2024-12-16 19:22:27.607811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free
00:20:43.299 [2024-12-16 19:22:27.607817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free
00:20:43.299 [2024-12-16 19:22:27.607824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free
00:20:43.299 [2024-12-16 19:22:27.607829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free
00:20:43.299 [2024-12-16 19:22:27.607836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free
00:20:43.299 [2024-12-16 19:22:27.607841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free
00:20:43.299 [2024-12-16 19:22:27.607848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free
00:20:43.299 [2024-12-16 19:22:27.607853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free
00:20:43.299 [2024-12-16 19:22:27.607861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free
00:20:43.299 [2024-12-16 19:22:27.607866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free
00:20:43.299 [2024-12-16 19:22:27.607874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free
00:20:43.299 [2024-12-16 19:22:27.607880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free
00:20:43.299 [2024-12-16 19:22:27.607888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free
00:20:43.299 [2024-12-16 19:22:27.607893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free
00:20:43.299 [2024-12-16 19:22:27.607900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free
00:20:43.299 [2024-12-16 19:22:27.607906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free
00:20:43.299 [2024-12-16 19:22:27.607914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free
00:20:43.299 [2024-12-16 19:22:27.607920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free
00:20:43.299 [2024-12-16 19:22:27.607927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free
00:20:43.299 [2024-12-16 19:22:27.607932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free
00:20:43.299 [2024-12-16 19:22:27.607939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free
00:20:43.299 [2024-12-16 19:22:27.607945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free
00:20:43.299 [2024-12-16 19:22:27.607953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free
00:20:43.299 [2024-12-16 19:22:27.607963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free
00:20:43.299 [2024-12-16 19:22:27.607970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free
00:20:43.299 [2024-12-16 19:22:27.607975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free
00:20:43.299 [2024-12-16 19:22:27.607982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free
00:20:43.299 [2024-12-16 19:22:27.607988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free
00:20:43.299 [2024-12-16 19:22:27.607995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free
00:20:43.299 [2024-12-16 19:22:27.608001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free
00:20:43.299 [2024-12-16 19:22:27.608008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free
00:20:43.299 [2024-12-16 19:22:27.608013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free
00:20:43.299 [2024-12-16 19:22:27.608022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free
00:20:43.299 [2024-12-16 19:22:27.608027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free
00:20:43.299 [2024-12-16 19:22:27.608034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free
00:20:43.299 [2024-12-16 19:22:27.608039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free
00:20:43.299 [2024-12-16 19:22:27.608047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free
00:20:43.299 [2024-12-16 19:22:27.608052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free
00:20:43.299 [2024-12-16 19:22:27.608059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free
00:20:43.299 [2024-12-16 19:22:27.608064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free
00:20:43.300 [2024-12-16 19:22:27.608071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free
00:20:43.300 [2024-12-16 19:22:27.608076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free
00:20:43.300 [2024-12-16 19:22:27.608083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free
00:20:43.300 [2024-12-16 19:22:27.608089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free
00:20:43.300 [2024-12-16 19:22:27.608096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free
00:20:43.300 [2024-12-16 19:22:27.608102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free
00:20:43.300 [2024-12-16 19:22:27.608109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free
00:20:43.300 [2024-12-16 19:22:27.608115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free
00:20:43.300 [2024-12-16 19:22:27.608124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free
00:20:43.300 [2024-12-16 19:22:27.608129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free
00:20:43.300 [2024-12-16 19:22:27.608136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free
00:20:43.300 [2024-12-16 19:22:27.608141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free
00:20:43.300 [2024-12-16 19:22:27.608148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free
00:20:43.300 [2024-12-16 19:22:27.608153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free
00:20:43.300 [2024-12-16 19:22:27.608160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free
00:20:43.300 [2024-12-16 19:22:27.608165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free
00:20:43.300 [2024-12-16 19:22:27.608185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free
00:20:43.300 [2024-12-16 19:22:27.608191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free
00:20:43.300 [2024-12-16 19:22:27.608199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free
00:20:43.300 [2024-12-16 19:22:27.608205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free
00:20:43.300 [2024-12-16 19:22:27.608212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free
00:20:43.300 [2024-12-16 19:22:27.608217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free
00:20:43.300 [2024-12-16 19:22:27.608224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free
00:20:43.300 [2024-12-16 19:22:27.608230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free
00:20:43.300 [2024-12-16 19:22:27.608238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free
00:20:43.300 [2024-12-16 19:22:27.608243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free
00:20:43.300 [2024-12-16 19:22:27.608250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free
00:20:43.300 [2024-12-16 19:22:27.608256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free
00:20:43.300 [2024-12-16 19:22:27.608263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free
00:20:43.300 [2024-12-16 19:22:27.608268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free
00:20:43.300 [2024-12-16 19:22:27.608275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free
00:20:43.300 [2024-12-16 19:22:27.608281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free
00:20:43.300 [2024-12-16 19:22:27.608287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free
00:20:43.300 [2024-12-16 19:22:27.608293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free
00:20:43.300 [2024-12-16 19:22:27.608299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free
00:20:43.300 [2024-12-16 19:22:27.608305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free
00:20:43.300 [2024-12-16 19:22:27.608314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free
00:20:43.300 [2024-12-16 19:22:27.608320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free
00:20:43.300 [2024-12-16 19:22:27.608327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:20:43.300 [2024-12-16 19:22:27.608343] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:20:43.300 [2024-12-16 19:22:27.608352] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 36f840fd-a489-4625-9f63-997690818b8f
00:20:43.300 [2024-12-16 19:22:27.608360] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:20:43.300 [2024-12-16 19:22:27.608367] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:20:43.300 [2024-12-16 19:22:27.608372] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:20:43.300 [2024-12-16 19:22:27.608380] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:20:43.300 [2024-12-16 19:22:27.608385] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:20:43.300 [2024-12-16 19:22:27.608392] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:20:43.300 [2024-12-16 19:22:27.608397] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:20:43.300 [2024-12-16 19:22:27.608404] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:20:43.300 [2024-12-16 19:22:27.608408] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:20:43.300 [2024-12-16 19:22:27.608415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:43.300 [2024-12-16 19:22:27.608421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:20:43.300 [2024-12-16 19:22:27.608428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.757 ms
00:20:43.300 [2024-12-16 19:22:27.608434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:43.300 [2024-12-16 19:22:27.618035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:43.300 [2024-12-16 19:22:27.618059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:20:43.300 [2024-12-16 19:22:27.618069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.583 ms
00:20:43.300 [2024-12-16 19:22:27.618075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:43.300 [2024-12-16 19:22:27.618370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:43.300 [2024-12-16 19:22:27.618385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:20:43.300 [2024-12-16 19:22:27.618394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.263 ms
00:20:43.300 [2024-12-16 19:22:27.618400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:43.561 [2024-12-16 19:22:27.653076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:43.561 [2024-12-16 19:22:27.653104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:20:43.561 [2024-12-16 19:22:27.653114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:43.561 [2024-12-16 19:22:27.653121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:43.561 [2024-12-16 19:22:27.653203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:43.561 [2024-12-16 19:22:27.653212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:20:43.561 [2024-12-16 19:22:27.653221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:43.561 [2024-12-16 19:22:27.653227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:43.561 [2024-12-16 19:22:27.653259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:43.561 [2024-12-16 19:22:27.653266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:20:43.561 [2024-12-16 19:22:27.653275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:43.561 [2024-12-16 19:22:27.653280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:43.561 [2024-12-16 19:22:27.653295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:43.561 [2024-12-16 19:22:27.653300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:20:43.561 [2024-12-16 19:22:27.653308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:43.561 [2024-12-16 19:22:27.653314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:43.561 [2024-12-16 19:22:27.711717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:43.561 [2024-12-16 19:22:27.711749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:20:43.561 [2024-12-16 19:22:27.711760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:43.561 [2024-12-16 19:22:27.711766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:43.562 [2024-12-16 19:22:27.759888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:43.562 [2024-12-16 19:22:27.759922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:20:43.562 [2024-12-16 19:22:27.759932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:43.562 [2024-12-16 19:22:27.759940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:43.562 [2024-12-16 19:22:27.759999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:43.562 [2024-12-16 19:22:27.760006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:20:43.562 [2024-12-16 19:22:27.760016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:43.562 [2024-12-16 19:22:27.760022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:43.562 [2024-12-16 19:22:27.760045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:43.562 [2024-12-16 19:22:27.760052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:20:43.562 [2024-12-16 19:22:27.760059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:43.562 [2024-12-16 19:22:27.760064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:43.562 [2024-12-16 19:22:27.760136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:43.562 [2024-12-16 19:22:27.760143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:20:43.562 [2024-12-16 19:22:27.760150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:43.562 [2024-12-16 19:22:27.760156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:43.562 [2024-12-16 19:22:27.760197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:43.562 [2024-12-16 19:22:27.760205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:20:43.562 [2024-12-16 19:22:27.760213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:43.562 [2024-12-16 19:22:27.760218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:43.562 [2024-12-16 19:22:27.760250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:43.562 [2024-12-16 19:22:27.760256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:20:43.562 [2024-12-16 19:22:27.760265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:43.562 [2024-12-16 19:22:27.760271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:43.562 [2024-12-16 19:22:27.760304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:43.562 [2024-12-16 19:22:27.760311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:20:43.562 [2024-12-16 19:22:27.760318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:43.562 [2024-12-16 19:22:27.760323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:43.562 [2024-12-16 19:22:27.760427] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 205.958 ms, result 0
00:20:44.135 19:22:28 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:20:44.135 [2024-12-16 19:22:28.343925] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization...
00:20:44.135 [2024-12-16 19:22:28.344053] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78806 ]
00:20:44.397 [2024-12-16 19:22:28.500372] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:44.397 [2024-12-16 19:22:28.585712] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:20:44.659 [2024-12-16 19:22:28.794800] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:20:44.659 [2024-12-16 19:22:28.794851] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:20:44.659 [2024-12-16 19:22:28.942747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:44.659 [2024-12-16 19:22:28.942791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration
00:20:44.659 [2024-12-16 19:22:28.942804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms
00:20:44.659 [2024-12-16 19:22:28.942811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:44.659 [2024-12-16 19:22:28.945458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:44.659 [2024-12-16 19:22:28.945490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:20:44.659 [2024-12-16 19:22:28.945499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.632 ms
00:20:44.659 [2024-12-16 19:22:28.945507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:44.659 [2024-12-16 19:22:28.945583] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:20:44.659 [2024-12-16 19:22:28.946242] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:20:44.659 [2024-12-16 19:22:28.946267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:44.659 [2024-12-16 19:22:28.946275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:20:44.659 [2024-12-16 19:22:28.946283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.691 ms
00:20:44.659 [2024-12-16 19:22:28.946290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:44.659 [2024-12-16 19:22:28.947411] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:20:44.659 [2024-12-16 19:22:28.960263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:44.659 [2024-12-16 19:22:28.960300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block
00:20:44.659 [2024-12-16 19:22:28.960312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.854 ms
00:20:44.659 [2024-12-16 19:22:28.960321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:44.659 [2024-12-16 19:22:28.960409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:44.659 [2024-12-16 19:22:28.960420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block
00:20:44.659 [2024-12-16 19:22:28.960428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms
00:20:44.659 [2024-12-16 19:22:28.960435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:44.659 [2024-12-16 19:22:28.965444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:44.659 [2024-12-16 19:22:28.965474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:20:44.659 [2024-12-16 19:22:28.965483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.966 ms
00:20:44.659 [2024-12-16 19:22:28.965490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:44.659 [2024-12-16 19:22:28.965576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:44.659 [2024-12-16 19:22:28.965585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:20:44.659 [2024-12-16 19:22:28.965593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms
00:20:44.659 [2024-12-16 19:22:28.965600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:44.659 [2024-12-16 19:22:28.965626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:44.659 [2024-12-16 19:22:28.965635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device
00:20:44.659 [2024-12-16 19:22:28.965643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms
00:20:44.659 [2024-12-16 19:22:28.965650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:44.659 [2024-12-16 19:22:28.965669] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread
00:20:44.659 [2024-12-16 19:22:28.969182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:44.659 [2024-12-16 19:22:28.969209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:20:44.659 [2024-12-16 19:22:28.969218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.517 ms
00:20:44.659 [2024-12-16 19:22:28.969225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:44.659 [2024-12-16 19:22:28.969261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:44.659 [2024-12-16 19:22:28.969270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands
00:20:44.659 [2024-12-16 19:22:28.969278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms
00:20:44.659 [2024-12-16 19:22:28.969285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:44.659 [2024-12-16 19:22:28.969304] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
00:20:44.659 [2024-12-16 19:22:28.969323] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes
00:20:44.659 [2024-12-16 19:22:28.969356] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes
00:20:44.659 [2024-12-16 19:22:28.969371] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes
00:20:44.659 [2024-12-16 19:22:28.969473] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes
00:20:44.659 [2024-12-16 19:22:28.969483] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
00:20:44.659 [2024-12-16 19:22:28.969494] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes
00:20:44.659 [2024-12-16 19:22:28.969506] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB
00:20:44.659 [2024-12-16 19:22:28.969515] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB
00:20:44.659 [2024-12-16 19:22:28.969523] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960
00:20:44.659 [2024-12-16 19:22:28.969530] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4
00:20:44.659 [2024-12-16 19:22:28.969537] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048
00:20:44.659 [2024-12-16 19:22:28.969544] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5
00:20:44.659 [2024-12-16 19:22:28.969552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:44.659 [2024-12-16 19:22:28.969559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout
00:20:44.659 [2024-12-16 19:22:28.969566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.250 ms
00:20:44.659 [2024-12-16 19:22:28.969572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:44.659 [2024-12-16 19:22:28.969672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:44.659 [2024-12-16 19:22:28.969684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout
00:20:44.659 [2024-12-16 19:22:28.969691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms
00:20:44.659 [2024-12-16 19:22:28.969698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:44.659 [2024-12-16 19:22:28.969795] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
00:20:44.659 [2024-12-16 19:22:28.969812] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb
00:20:44.659 [2024-12-16 19:22:28.969820] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB
00:20:44.659 [2024-12-16 19:22:28.969828] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:20:44.659 [2024-12-16 19:22:28.969835] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p
00:20:44.659 [2024-12-16 19:22:28.969843] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB
00:20:44.659 [2024-12-16 19:22:28.969850] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB
00:20:44.659 [2024-12-16 19:22:28.969857] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md
00:20:44.659 [2024-12-16 19:22:28.969863] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB
00:20:44.659 [2024-12-16 19:22:28.969870] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB
00:20:44.659 [2024-12-16 19:22:28.969876] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror
00:20:44.659 [2024-12-16 19:22:28.969901] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB
00:20:44.659 [2024-12-16 19:22:28.969908] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB
00:20:44.659 [2024-12-16 19:22:28.969916] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md
00:20:44.659 [2024-12-16 19:22:28.969923] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB
00:20:44.659 [2024-12-16 19:22:28.969929] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:20:44.659 [2024-12-16 19:22:28.969937] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror
00:20:44.659 [2024-12-16 19:22:28.969943] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB
00:20:44.659 [2024-12-16 19:22:28.969950] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:20:44.659 [2024-12-16 19:22:28.969957] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0
00:20:44.659 [2024-12-16 19:22:28.969963] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB
00:20:44.659 [2024-12-16 19:22:28.969969] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:20:44.659 [2024-12-16 19:22:28.969976] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1
00:20:44.660 [2024-12-16 19:22:28.969982] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB
00:20:44.660 [2024-12-16 19:22:28.969989] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:20:44.660 [2024-12-16 19:22:28.969996] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2
00:20:44.660 [2024-12-16 19:22:28.970003] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB
00:20:44.660 [2024-12-16 19:22:28.970009] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:20:44.660 [2024-12-16 19:22:28.970016] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3
00:20:44.660 [2024-12-16 19:22:28.970022] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB
00:20:44.660 [2024-12-16 19:22:28.970029] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:20:44.660 [2024-12-16 19:22:28.970035] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md
00:20:44.660 [2024-12-16 19:22:28.970042] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB
00:20:44.660 [2024-12-16 19:22:28.970048] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB
00:20:44.660 [2024-12-16 19:22:28.970054] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror
00:20:44.660 [2024-12-16 19:22:28.970061] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB
00:20:44.660 [2024-12-16 19:22:28.970067] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB
00:20:44.660 [2024-12-16 19:22:28.970073] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log
00:20:44.660 [2024-12-16 19:22:28.970080] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB
00:20:44.660 [2024-12-16 19:22:28.970086] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:20:44.660 [2024-12-16 19:22:28.970093] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror
00:20:44.660 [2024-12-16 19:22:28.970099] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB
00:20:44.660 [2024-12-16 19:22:28.970105] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:20:44.660 [2024-12-16 19:22:28.970111] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
00:20:44.660 [2024-12-16 19:22:28.970118] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror
00:20:44.660 [2024-12-16 19:22:28.970129] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB
00:20:44.660 [2024-12-16 19:22:28.970136] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:20:44.660 [2024-12-16 19:22:28.970143] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap
00:20:44.660 [2024-12-16 19:22:28.970150] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB
00:20:44.660 [2024-12-16 19:22:28.970156] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB
00:20:44.660 [2024-12-16 19:22:28.970163] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm
00:20:44.660 [2024-12-16 19:22:28.970169] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB
00:20:44.660 [2024-12-16 19:22:28.970188] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB
00:20:44.660 [2024-12-16 19:22:28.970196] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
00:20:44.660 [2024-12-16 19:22:28.970205] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:20:44.660 [2024-12-16 19:22:28.970214] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00
00:20:44.660 [2024-12-16 19:22:28.970221] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80
00:20:44.660 [2024-12-16 19:22:28.970228] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80
00:20:44.660 [2024-12-16 19:22:28.970235] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800
00:20:44.660 [2024-12-16 19:22:28.970242] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800
00:20:44.660 [2024-12-16 19:22:28.970249] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800
00:20:44.660 [2024-12-16 19:22:28.970256] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800
00:20:44.660 [2024-12-16 19:22:28.970264] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40
00:20:44.660 [2024-12-16 19:22:28.970271] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40
00:20:44.660 [2024-12-16 19:22:28.970277] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20
00:20:44.660 [2024-12-16 19:22:28.970285] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20
00:20:44.660 [2024-12-16 19:22:28.970291] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20
00:20:44.660 [2024-12-16 19:22:28.970298] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20
00:20:44.660 [2024-12-16 19:22:28.970306] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0
00:20:44.660 [2024-12-16 19:22:28.970313] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
00:20:44.660 [2024-12-16 19:22:28.970321] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:44.660 [2024-12-16 19:22:28.970329] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:44.660 [2024-12-16 19:22:28.970336] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:44.660 [2024-12-16 19:22:28.970343] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:44.660 [2024-12-16 19:22:28.970350] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:44.660 [2024-12-16 19:22:28.970357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.660 [2024-12-16 19:22:28.970367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:44.660 [2024-12-16 19:22:28.970375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.631 ms 00:20:44.660 [2024-12-16 19:22:28.970391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.660 [2024-12-16 19:22:28.996622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.660 [2024-12-16 19:22:28.996657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:44.660 [2024-12-16 19:22:28.996667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.164 ms 00:20:44.660 [2024-12-16 19:22:28.996675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.660 [2024-12-16 19:22:28.996790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.660 [2024-12-16 19:22:28.996800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:44.660 [2024-12-16 19:22:28.996808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:20:44.660 [2024-12-16 19:22:28.996815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.922 [2024-12-16 19:22:29.044020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.922 [2024-12-16 19:22:29.044073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:44.922 [2024-12-16 19:22:29.044088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.183 ms 00:20:44.922 [2024-12-16 19:22:29.044097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.922 [2024-12-16 19:22:29.044198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.922 [2024-12-16 19:22:29.044210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:44.922 [2024-12-16 19:22:29.044219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:44.922 [2024-12-16 19:22:29.044227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.922 [2024-12-16 19:22:29.044593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.922 [2024-12-16 19:22:29.044621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:44.922 [2024-12-16 19:22:29.044631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.339 ms 00:20:44.922 [2024-12-16 19:22:29.044642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.922 [2024-12-16 19:22:29.044772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:20:44.922 [2024-12-16 19:22:29.044782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:44.922 [2024-12-16 19:22:29.044790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.106 ms 00:20:44.922 [2024-12-16 19:22:29.044797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.922 [2024-12-16 19:22:29.058760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.922 [2024-12-16 19:22:29.058792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:44.922 [2024-12-16 19:22:29.058803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.943 ms 00:20:44.922 [2024-12-16 19:22:29.058811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.922 [2024-12-16 19:22:29.072121] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:20:44.922 [2024-12-16 19:22:29.072160] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:44.922 [2024-12-16 19:22:29.072181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.922 [2024-12-16 19:22:29.072189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:44.922 [2024-12-16 19:22:29.072198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.268 ms 00:20:44.922 [2024-12-16 19:22:29.072205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.922 [2024-12-16 19:22:29.097255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.922 [2024-12-16 19:22:29.097299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:44.922 [2024-12-16 19:22:29.097311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.974 ms 00:20:44.922 [2024-12-16 19:22:29.097318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.922 [2024-12-16 19:22:29.109638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.922 [2024-12-16 19:22:29.109681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:44.922 [2024-12-16 19:22:29.109691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.238 ms 00:20:44.922 [2024-12-16 19:22:29.109698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.922 [2024-12-16 19:22:29.121813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.922 [2024-12-16 19:22:29.121854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:44.922 [2024-12-16 19:22:29.121865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.039 ms 00:20:44.922 [2024-12-16 19:22:29.121872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.922 [2024-12-16 19:22:29.122529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.922 [2024-12-16 19:22:29.122561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:44.922 [2024-12-16 19:22:29.122571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.549 ms 00:20:44.922 [2024-12-16 19:22:29.122578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.922 [2024-12-16 19:22:29.186314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.922 [2024-12-16 
19:22:29.186389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:44.922 [2024-12-16 19:22:29.186405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 63.708 ms 00:20:44.922 [2024-12-16 19:22:29.186414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.922 [2024-12-16 19:22:29.197510] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:44.922 [2024-12-16 19:22:29.216082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.922 [2024-12-16 19:22:29.216131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:44.922 [2024-12-16 19:22:29.216144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.567 ms 00:20:44.922 [2024-12-16 19:22:29.216158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.922 [2024-12-16 19:22:29.216264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.922 [2024-12-16 19:22:29.216276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:44.922 [2024-12-16 19:22:29.216286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:20:44.922 [2024-12-16 19:22:29.216295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.922 [2024-12-16 19:22:29.216354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.922 [2024-12-16 19:22:29.216364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:44.922 [2024-12-16 19:22:29.216374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:20:44.922 [2024-12-16 19:22:29.216386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.922 [2024-12-16 19:22:29.216416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.922 [2024-12-16 19:22:29.216426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:44.922 [2024-12-16 19:22:29.216435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:44.922 [2024-12-16 19:22:29.216442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.922 [2024-12-16 19:22:29.216479] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:44.922 [2024-12-16 19:22:29.216490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.922 [2024-12-16 19:22:29.216499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:44.922 [2024-12-16 19:22:29.216507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:20:44.922 [2024-12-16 19:22:29.216515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.922 [2024-12-16 19:22:29.242240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.922 [2024-12-16 19:22:29.242293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:44.922 [2024-12-16 19:22:29.242306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.703 ms 00:20:44.922 [2024-12-16 19:22:29.242315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.922 [2024-12-16 19:22:29.242437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.922 [2024-12-16 19:22:29.242449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:44.923 [2024-12-16 
19:22:29.242459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:20:44.923 [2024-12-16 19:22:29.242467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.923 [2024-12-16 19:22:29.243543] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:44.923 [2024-12-16 19:22:29.246803] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 300.450 ms, result 0 00:20:44.923 [2024-12-16 19:22:29.248157] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:44.923 [2024-12-16 19:22:29.261563] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:46.310  [2024-12-16T19:22:31.608Z] Copying: 21/256 [MB] (21 MBps) [2024-12-16T19:22:32.552Z] Copying: 40/256 [MB] (19 MBps) [2024-12-16T19:22:33.495Z] Copying: 55/256 [MB] (14 MBps) [2024-12-16T19:22:34.440Z] Copying: 74/256 [MB] (19 MBps) [2024-12-16T19:22:35.384Z] Copying: 97/256 [MB] (22 MBps) [2024-12-16T19:22:36.329Z] Copying: 117/256 [MB] (20 MBps) [2024-12-16T19:22:37.715Z] Copying: 137/256 [MB] (19 MBps) [2024-12-16T19:22:38.661Z] Copying: 153/256 [MB] (16 MBps) [2024-12-16T19:22:39.605Z] Copying: 173/256 [MB] (19 MBps) [2024-12-16T19:22:40.548Z] Copying: 195/256 [MB] (22 MBps) [2024-12-16T19:22:41.496Z] Copying: 218/256 [MB] (22 MBps) [2024-12-16T19:22:42.440Z] Copying: 236/256 [MB] (18 MBps) [2024-12-16T19:22:42.700Z] Copying: 252/256 [MB] (15 MBps) [2024-12-16T19:22:43.274Z] Copying: 256/256 [MB] (average 19 MBps)[2024-12-16 19:22:42.990956] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:58.920 [2024-12-16 19:22:43.001236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.920 [2024-12-16 19:22:43.001278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:58.920 [2024-12-16 19:22:43.001295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:58.920 [2024-12-16 19:22:43.001304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.920 [2024-12-16 19:22:43.001329] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:58.920 [2024-12-16 19:22:43.004206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.920 [2024-12-16 19:22:43.004240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:58.920 [2024-12-16 19:22:43.004250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.862 ms 00:20:58.920 [2024-12-16 19:22:43.004258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.920 [2024-12-16 19:22:43.004536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.920 [2024-12-16 19:22:43.004554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:58.920 [2024-12-16 19:22:43.004563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.250 ms 00:20:58.920 [2024-12-16 19:22:43.004570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.920 [2024-12-16 19:22:43.008665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.920 [2024-12-16 19:22:43.008690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:58.920 [2024-12-16 
19:22:43.008699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.075 ms 00:20:58.920 [2024-12-16 19:22:43.008707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.920 [2024-12-16 19:22:43.015676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.920 [2024-12-16 19:22:43.015709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:58.920 [2024-12-16 19:22:43.015719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.949 ms 00:20:58.920 [2024-12-16 19:22:43.015726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.920 [2024-12-16 19:22:43.041801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.920 [2024-12-16 19:22:43.041842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:58.920 [2024-12-16 19:22:43.041853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.016 ms 00:20:58.920 [2024-12-16 19:22:43.041861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.920 [2024-12-16 19:22:43.057049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.920 [2024-12-16 19:22:43.057088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:58.920 [2024-12-16 19:22:43.057106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.143 ms 00:20:58.920 [2024-12-16 19:22:43.057113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.920 [2024-12-16 19:22:43.057282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.920 [2024-12-16 19:22:43.057295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:58.920 [2024-12-16 19:22:43.057312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:20:58.920 [2024-12-16 19:22:43.057321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.920 [2024-12-16 19:22:43.082678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.920 [2024-12-16 19:22:43.082737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:58.920 [2024-12-16 19:22:43.082749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.339 ms 00:20:58.920 [2024-12-16 19:22:43.082756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.920 [2024-12-16 19:22:43.108558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.920 [2024-12-16 19:22:43.108603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:58.920 [2024-12-16 19:22:43.108614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.735 ms 00:20:58.920 [2024-12-16 19:22:43.108622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.920 [2024-12-16 19:22:43.134458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.920 [2024-12-16 19:22:43.134507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:58.920 [2024-12-16 19:22:43.134519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.767 ms 00:20:58.920 [2024-12-16 19:22:43.134527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.920 [2024-12-16 19:22:43.160274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.920 [2024-12-16 19:22:43.160323] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:58.920 [2024-12-16 19:22:43.160335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.644 ms 00:20:58.920 [2024-12-16 19:22:43.160342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.920 [2024-12-16 19:22:43.160406] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:58.920 [2024-12-16 19:22:43.160423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:58.920 [2024-12-16 19:22:43.160435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:58.920 [2024-12-16 19:22:43.160444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:58.920 [2024-12-16 19:22:43.160452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:58.920 [2024-12-16 19:22:43.160461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:58.920 [2024-12-16 19:22:43.160473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:58.921 [2024-12-16 19:22:43.160482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:58.921 [2024-12-16 19:22:43.160489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:58.921 [2024-12-16 19:22:43.160497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:58.921 [2024-12-16 19:22:43.160506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:58.921 [2024-12-16 19:22:43.160514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:58.921 [2024-12-16 19:22:43.160522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:58.921 [2024-12-16 19:22:43.160530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:58.921 [2024-12-16 19:22:43.160538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:58.921 [2024-12-16 19:22:43.160545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:58.921 [2024-12-16 19:22:43.160553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:58.921 [2024-12-16 19:22:43.160560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:58.921 [2024-12-16 19:22:43.160568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:58.921 [2024-12-16 19:22:43.160575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:58.921 [2024-12-16 19:22:43.160582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:58.921 [2024-12-16 19:22:43.160589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:58.921 [2024-12-16 19:22:43.160596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:58.921 [2024-12-16 
19:22:43.160604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:58.921 [2024-12-16 19:22:43.160611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:58.921 [2024-12-16 19:22:43.160618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:58.921 [2024-12-16 19:22:43.160625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:58.921 [2024-12-16 19:22:43.160632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:58.921 [2024-12-16 19:22:43.160640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:58.921 [2024-12-16 19:22:43.160648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:58.921 [2024-12-16 19:22:43.160656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:58.921 [2024-12-16 19:22:43.160668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:58.921 [2024-12-16 19:22:43.160676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:58.921 [2024-12-16 19:22:43.160684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:58.921 [2024-12-16 19:22:43.160693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:58.921 [2024-12-16 19:22:43.160700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:58.921 [2024-12-16 19:22:43.160708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:58.921 [2024-12-16 19:22:43.160715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:58.921 [2024-12-16 19:22:43.160723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:58.921 [2024-12-16 19:22:43.160732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:58.921 [2024-12-16 19:22:43.160740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:58.921 [2024-12-16 19:22:43.160747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:58.921 [2024-12-16 19:22:43.160755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:58.921 [2024-12-16 19:22:43.160763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:58.921 [2024-12-16 19:22:43.160771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:58.921 [2024-12-16 19:22:43.160778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:58.921 [2024-12-16 19:22:43.160786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:58.921 [2024-12-16 19:22:43.160793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 
00:20:58.921 [2024-12-16 19:22:43.160801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:58.921 [2024-12-16 19:22:43.160808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:58.921 [2024-12-16 19:22:43.160815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:58.921 [2024-12-16 19:22:43.160823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:58.921 [2024-12-16 19:22:43.160831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:58.921 [2024-12-16 19:22:43.160839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:58.921 [2024-12-16 19:22:43.160847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:58.921 [2024-12-16 19:22:43.160854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:58.921 [2024-12-16 19:22:43.160862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:58.921 [2024-12-16 19:22:43.160869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:58.921 [2024-12-16 19:22:43.160877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:58.921 [2024-12-16 19:22:43.160885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:58.921 [2024-12-16 19:22:43.160892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:58.921 [2024-12-16 19:22:43.160900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:58.921 [2024-12-16 19:22:43.160909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:58.921 [2024-12-16 19:22:43.160917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:58.921 [2024-12-16 19:22:43.160926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:58.921 [2024-12-16 19:22:43.160933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:58.921 [2024-12-16 19:22:43.160941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:58.921 [2024-12-16 19:22:43.160949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:58.921 [2024-12-16 19:22:43.160957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:58.921 [2024-12-16 19:22:43.160965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:58.921 [2024-12-16 19:22:43.160974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:58.921 [2024-12-16 19:22:43.160982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:58.921 [2024-12-16 19:22:43.160990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 
wr_cnt: 0 state: free 00:20:58.921 [2024-12-16 19:22:43.160997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:58.921 [2024-12-16 19:22:43.161005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:58.921 [2024-12-16 19:22:43.161013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:58.921 [2024-12-16 19:22:43.161021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:58.921 [2024-12-16 19:22:43.161029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:58.921 [2024-12-16 19:22:43.161037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:58.921 [2024-12-16 19:22:43.161047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:58.921 [2024-12-16 19:22:43.161054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:58.921 [2024-12-16 19:22:43.161062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:58.921 [2024-12-16 19:22:43.161069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:58.921 [2024-12-16 19:22:43.161077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:58.921 [2024-12-16 19:22:43.161085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:58.921 [2024-12-16 19:22:43.161094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:58.921 [2024-12-16 19:22:43.161102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:58.921 [2024-12-16 19:22:43.161110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:58.921 [2024-12-16 19:22:43.161117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:58.921 [2024-12-16 19:22:43.161124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:58.921 [2024-12-16 19:22:43.161132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:58.921 [2024-12-16 19:22:43.161139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:58.921 [2024-12-16 19:22:43.161147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:58.921 [2024-12-16 19:22:43.161155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:58.921 [2024-12-16 19:22:43.161192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:58.921 [2024-12-16 19:22:43.161200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:58.921 [2024-12-16 19:22:43.161209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:58.921 [2024-12-16 19:22:43.161217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:58.921 [2024-12-16 19:22:43.161224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:58.922 [2024-12-16 19:22:43.161233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:58.922 [2024-12-16 19:22:43.161242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:58.922 [2024-12-16 19:22:43.161258] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:58.922 [2024-12-16 19:22:43.161266] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 36f840fd-a489-4625-9f63-997690818b8f 00:20:58.922 [2024-12-16 19:22:43.161276] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:58.922 [2024-12-16 19:22:43.161284] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:58.922 [2024-12-16 19:22:43.161292] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:58.922 [2024-12-16 19:22:43.161302] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:58.922 [2024-12-16 19:22:43.161309] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:58.922 [2024-12-16 19:22:43.161318] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:58.922 [2024-12-16 19:22:43.161330] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:58.922 [2024-12-16 19:22:43.161338] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:58.922 [2024-12-16 19:22:43.161345] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:58.922 [2024-12-16 19:22:43.161353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.922 [2024-12-16 19:22:43.161362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:58.922 [2024-12-16 19:22:43.161371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.948 ms 00:20:58.922 [2024-12-16 19:22:43.161379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.922 [2024-12-16 19:22:43.175244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.922 [2024-12-16 19:22:43.175284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:58.922 [2024-12-16 19:22:43.175295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.843 ms 00:20:58.922 [2024-12-16 19:22:43.175303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.922 [2024-12-16 19:22:43.175720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.922 [2024-12-16 19:22:43.175739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:58.922 [2024-12-16 19:22:43.175749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.370 ms 00:20:58.922 [2024-12-16 19:22:43.175757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.922 [2024-12-16 19:22:43.215143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:58.922 [2024-12-16 19:22:43.215208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:58.922 [2024-12-16 19:22:43.215220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:58.922 [2024-12-16 19:22:43.215235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.922 
[2024-12-16 19:22:43.215348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:58.922 [2024-12-16 19:22:43.215359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:58.922 [2024-12-16 19:22:43.215368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:58.922 [2024-12-16 19:22:43.215375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.922 [2024-12-16 19:22:43.215433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:58.922 [2024-12-16 19:22:43.215443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:58.922 [2024-12-16 19:22:43.215452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:58.922 [2024-12-16 19:22:43.215460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.922 [2024-12-16 19:22:43.215482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:58.922 [2024-12-16 19:22:43.215496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:58.922 [2024-12-16 19:22:43.215504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:58.922 [2024-12-16 19:22:43.215512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.183 [2024-12-16 19:22:43.301064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:59.183 [2024-12-16 19:22:43.301119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:59.183 [2024-12-16 19:22:43.301133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:59.183 [2024-12-16 19:22:43.301141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.183 [2024-12-16 19:22:43.370690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:59.183 [2024-12-16 19:22:43.370751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:59.183 [2024-12-16 19:22:43.370763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:59.183 [2024-12-16 19:22:43.370772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.183 [2024-12-16 19:22:43.370864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:59.183 [2024-12-16 19:22:43.370875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:59.183 [2024-12-16 19:22:43.370884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:59.183 [2024-12-16 19:22:43.370894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.183 [2024-12-16 19:22:43.370926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:59.183 [2024-12-16 19:22:43.370940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:59.183 [2024-12-16 19:22:43.370948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:59.183 [2024-12-16 19:22:43.370957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.183 [2024-12-16 19:22:43.371053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:59.183 [2024-12-16 19:22:43.371065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:59.183 [2024-12-16 19:22:43.371074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:59.183 [2024-12-16 19:22:43.371084] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.183 [2024-12-16 19:22:43.371120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:59.183 [2024-12-16 19:22:43.371130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:59.183 [2024-12-16 19:22:43.371141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:59.183 [2024-12-16 19:22:43.371150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.183 [2024-12-16 19:22:43.371218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:59.183 [2024-12-16 19:22:43.371230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:59.183 [2024-12-16 19:22:43.371238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:59.183 [2024-12-16 19:22:43.371247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.183 [2024-12-16 19:22:43.371296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:59.184 [2024-12-16 19:22:43.371313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:59.184 [2024-12-16 19:22:43.371322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:59.184 [2024-12-16 19:22:43.371330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.184 [2024-12-16 19:22:43.371492] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 370.249 ms, result 0 00:21:00.127 00:21:00.127 00:21:00.127 19:22:44 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:21:00.388 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:21:00.388 19:22:44 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:21:00.388 19:22:44 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:21:00.388 19:22:44 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:21:00.388 19:22:44 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:00.388 19:22:44 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:21:00.650 19:22:44 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:21:00.650 19:22:44 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 78753 00:21:00.650 19:22:44 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 78753 ']' 00:21:00.650 19:22:44 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 78753 00:21:00.650 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (78753) - No such process 00:21:00.650 Process with pid 78753 is not found 00:21:00.650 19:22:44 ftl.ftl_trim -- common/autotest_common.sh@981 -- # echo 'Process with pid 78753 is not found' 00:21:00.650 00:21:00.650 real 1m16.848s 00:21:00.650 user 1m43.562s 00:21:00.650 sys 0m5.671s 00:21:00.650 19:22:44 ftl.ftl_trim -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:00.650 ************************************ 00:21:00.650 END TEST ftl_trim 00:21:00.650 19:22:44 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:21:00.650 ************************************ 00:21:00.650 19:22:44 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:21:00.650 19:22:44 ftl -- common/autotest_common.sh@1105 -- # '[' 
5 -le 1 ']' 00:21:00.650 19:22:44 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:00.650 19:22:44 ftl -- common/autotest_common.sh@10 -- # set +x 00:21:00.650 ************************************ 00:21:00.650 START TEST ftl_restore 00:21:00.650 ************************************ 00:21:00.650 19:22:44 ftl.ftl_restore -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:21:00.650 * Looking for test storage... 00:21:00.650 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:21:00.650 19:22:44 ftl.ftl_restore -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:00.650 19:22:44 ftl.ftl_restore -- common/autotest_common.sh@1711 -- # lcov --version 00:21:00.650 19:22:44 ftl.ftl_restore -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:00.910 19:22:45 ftl.ftl_restore -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:00.910 19:22:45 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:00.910 19:22:45 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:00.910 19:22:45 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:00.910 19:22:45 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:21:00.910 19:22:45 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:21:00.910 19:22:45 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:21:00.910 19:22:45 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:21:00.910 19:22:45 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:21:00.910 19:22:45 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:21:00.910 19:22:45 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:21:00.910 19:22:45 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:00.910 19:22:45 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:21:00.910 19:22:45 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:21:00.910 19:22:45 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:00.910 19:22:45 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:00.910 19:22:45 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:21:00.910 19:22:45 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:21:00.910 19:22:45 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:00.910 19:22:45 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:21:00.910 19:22:45 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:21:00.910 19:22:45 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:21:00.910 19:22:45 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:21:00.910 19:22:45 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:00.910 19:22:45 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:21:00.910 19:22:45 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:21:00.910 19:22:45 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:00.910 19:22:45 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:00.910 19:22:45 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:21:00.910 19:22:45 ftl.ftl_restore -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:00.910 19:22:45 ftl.ftl_restore -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:00.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:00.910 --rc genhtml_branch_coverage=1 00:21:00.910 --rc genhtml_function_coverage=1 00:21:00.910 --rc genhtml_legend=1 00:21:00.910 --rc geninfo_all_blocks=1 00:21:00.910 --rc geninfo_unexecuted_blocks=1 00:21:00.910 00:21:00.910 ' 00:21:00.910 19:22:45 ftl.ftl_restore -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:00.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:00.910 --rc genhtml_branch_coverage=1 00:21:00.910 --rc genhtml_function_coverage=1 00:21:00.910 --rc genhtml_legend=1 00:21:00.910 --rc geninfo_all_blocks=1 00:21:00.910 --rc geninfo_unexecuted_blocks=1 00:21:00.910 00:21:00.910 ' 00:21:00.910 19:22:45 ftl.ftl_restore -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:00.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:00.910 --rc genhtml_branch_coverage=1 00:21:00.910 --rc genhtml_function_coverage=1 00:21:00.910 --rc genhtml_legend=1 00:21:00.910 --rc geninfo_all_blocks=1 00:21:00.910 --rc geninfo_unexecuted_blocks=1 00:21:00.910 00:21:00.910 ' 00:21:00.910 19:22:45 ftl.ftl_restore -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:00.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:00.910 --rc genhtml_branch_coverage=1 00:21:00.910 --rc genhtml_function_coverage=1 00:21:00.910 --rc genhtml_legend=1 00:21:00.910 --rc geninfo_all_blocks=1 00:21:00.910 --rc geninfo_unexecuted_blocks=1 00:21:00.910 00:21:00.910 ' 00:21:00.910 19:22:45 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:21:00.910 19:22:45 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:21:00.910 19:22:45 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:21:00.910 19:22:45 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:21:00.910 19:22:45 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:21:00.910 19:22:45 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:21:00.911 19:22:45 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:00.911 19:22:45 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:21:00.911 19:22:45 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:21:00.911 19:22:45 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:00.911 19:22:45 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:00.911 19:22:45 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:21:00.911 19:22:45 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:21:00.911 19:22:45 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:00.911 19:22:45 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:00.911 19:22:45 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:21:00.911 19:22:45 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:21:00.911 19:22:45 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:00.911 19:22:45 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:00.911 19:22:45 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:21:00.911 19:22:45 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:21:00.911 19:22:45 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:00.911 19:22:45 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:00.911 19:22:45 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:00.911 19:22:45 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:00.911 19:22:45 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:21:00.911 19:22:45 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:21:00.911 19:22:45 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:00.911 19:22:45 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:00.911 19:22:45 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:00.911 19:22:45 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:21:00.911 19:22:45 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.mdKzJROmvC 00:21:00.911 19:22:45 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:21:00.911 19:22:45 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:21:00.911 19:22:45 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:21:00.911 19:22:45 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:21:00.911 19:22:45 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:21:00.911 19:22:45 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:21:00.911 19:22:45 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:21:00.911 19:22:45 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:21:00.911 
19:22:45 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=79042 00:21:00.911 19:22:45 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 79042 00:21:00.911 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:00.911 19:22:45 ftl.ftl_restore -- common/autotest_common.sh@835 -- # '[' -z 79042 ']' 00:21:00.911 19:22:45 ftl.ftl_restore -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:00.911 19:22:45 ftl.ftl_restore -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:00.911 19:22:45 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:00.911 19:22:45 ftl.ftl_restore -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:00.911 19:22:45 ftl.ftl_restore -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:00.911 19:22:45 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:21:00.911 [2024-12-16 19:22:45.140127] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:21:00.911 [2024-12-16 19:22:45.140301] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79042 ] 00:21:01.172 [2024-12-16 19:22:45.305125] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:01.172 [2024-12-16 19:22:45.429147] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:21:02.111 19:22:46 ftl.ftl_restore -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:02.111 19:22:46 ftl.ftl_restore -- common/autotest_common.sh@868 -- # return 0 00:21:02.111 19:22:46 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:21:02.111 19:22:46 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:21:02.111 19:22:46 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:21:02.111 19:22:46 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:21:02.111 19:22:46 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:21:02.111 19:22:46 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:21:02.111 19:22:46 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:21:02.111 19:22:46 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:21:02.111 19:22:46 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:21:02.111 19:22:46 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:21:02.111 19:22:46 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:02.111 19:22:46 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:21:02.111 19:22:46 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:21:02.111 19:22:46 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:21:02.372 19:22:46 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:02.372 { 00:21:02.372 "name": "nvme0n1", 00:21:02.372 "aliases": [ 00:21:02.372 "136f684f-30fb-4e9b-865d-3e4ff410f3f6" 00:21:02.372 ], 00:21:02.372 "product_name": "NVMe disk", 00:21:02.372 "block_size": 4096, 00:21:02.372 "num_blocks": 1310720, 00:21:02.372 "uuid": 
"136f684f-30fb-4e9b-865d-3e4ff410f3f6", 00:21:02.372 "numa_id": -1, 00:21:02.372 "assigned_rate_limits": { 00:21:02.372 "rw_ios_per_sec": 0, 00:21:02.372 "rw_mbytes_per_sec": 0, 00:21:02.372 "r_mbytes_per_sec": 0, 00:21:02.372 "w_mbytes_per_sec": 0 00:21:02.372 }, 00:21:02.372 "claimed": true, 00:21:02.372 "claim_type": "read_many_write_one", 00:21:02.372 "zoned": false, 00:21:02.372 "supported_io_types": { 00:21:02.372 "read": true, 00:21:02.372 "write": true, 00:21:02.372 "unmap": true, 00:21:02.372 "flush": true, 00:21:02.372 "reset": true, 00:21:02.372 "nvme_admin": true, 00:21:02.372 "nvme_io": true, 00:21:02.372 "nvme_io_md": false, 00:21:02.372 "write_zeroes": true, 00:21:02.372 "zcopy": false, 00:21:02.372 "get_zone_info": false, 00:21:02.372 "zone_management": false, 00:21:02.372 "zone_append": false, 00:21:02.372 "compare": true, 00:21:02.372 "compare_and_write": false, 00:21:02.372 "abort": true, 00:21:02.372 "seek_hole": false, 00:21:02.372 "seek_data": false, 00:21:02.372 "copy": true, 00:21:02.372 "nvme_iov_md": false 00:21:02.372 }, 00:21:02.372 "driver_specific": { 00:21:02.372 "nvme": [ 00:21:02.372 { 00:21:02.372 "pci_address": "0000:00:11.0", 00:21:02.372 "trid": { 00:21:02.372 "trtype": "PCIe", 00:21:02.372 "traddr": "0000:00:11.0" 00:21:02.372 }, 00:21:02.372 "ctrlr_data": { 00:21:02.372 "cntlid": 0, 00:21:02.372 "vendor_id": "0x1b36", 00:21:02.372 "model_number": "QEMU NVMe Ctrl", 00:21:02.372 "serial_number": "12341", 00:21:02.372 "firmware_revision": "8.0.0", 00:21:02.372 "subnqn": "nqn.2019-08.org.qemu:12341", 00:21:02.372 "oacs": { 00:21:02.372 "security": 0, 00:21:02.372 "format": 1, 00:21:02.372 "firmware": 0, 00:21:02.372 "ns_manage": 1 00:21:02.372 }, 00:21:02.372 "multi_ctrlr": false, 00:21:02.372 "ana_reporting": false 00:21:02.372 }, 00:21:02.372 "vs": { 00:21:02.372 "nvme_version": "1.4" 00:21:02.372 }, 00:21:02.372 "ns_data": { 00:21:02.372 "id": 1, 00:21:02.372 "can_share": false 00:21:02.372 } 00:21:02.372 } 00:21:02.372 ], 00:21:02.372 "mp_policy": "active_passive" 00:21:02.372 } 00:21:02.372 } 00:21:02.372 ]' 00:21:02.372 19:22:46 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:21:02.372 19:22:46 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:21:02.372 19:22:46 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:02.372 19:22:46 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=1310720 00:21:02.372 19:22:46 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:21:02.372 19:22:46 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 5120 00:21:02.372 19:22:46 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:21:02.372 19:22:46 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:21:02.372 19:22:46 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:21:02.372 19:22:46 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:21:02.372 19:22:46 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:21:02.633 19:22:46 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=73a17cda-3af7-440a-b00b-2d82ee4edb76 00:21:02.633 19:22:46 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:21:02.633 19:22:46 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 73a17cda-3af7-440a-b00b-2d82ee4edb76 00:21:02.893 19:22:47 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore nvme0n1 lvs 00:21:03.154 19:22:47 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=054173d7-433f-4a23-b4a9-29958cab9565 00:21:03.154 19:22:47 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 054173d7-433f-4a23-b4a9-29958cab9565 00:21:03.418 19:22:47 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=6d08f68a-81c8-44eb-b7a4-9ee0659a0608 00:21:03.418 19:22:47 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:21:03.418 19:22:47 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 6d08f68a-81c8-44eb-b7a4-9ee0659a0608 00:21:03.418 19:22:47 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:21:03.418 19:22:47 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:21:03.418 19:22:47 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=6d08f68a-81c8-44eb-b7a4-9ee0659a0608 00:21:03.418 19:22:47 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:21:03.418 19:22:47 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size 6d08f68a-81c8-44eb-b7a4-9ee0659a0608 00:21:03.418 19:22:47 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=6d08f68a-81c8-44eb-b7a4-9ee0659a0608 00:21:03.418 19:22:47 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:03.418 19:22:47 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:21:03.418 19:22:47 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:21:03.418 19:22:47 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6d08f68a-81c8-44eb-b7a4-9ee0659a0608 00:21:03.701 19:22:47 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:03.701 { 00:21:03.701 "name": "6d08f68a-81c8-44eb-b7a4-9ee0659a0608", 00:21:03.701 "aliases": [ 00:21:03.701 "lvs/nvme0n1p0" 00:21:03.701 ], 00:21:03.701 "product_name": "Logical Volume", 00:21:03.701 "block_size": 4096, 00:21:03.701 "num_blocks": 26476544, 00:21:03.701 "uuid": "6d08f68a-81c8-44eb-b7a4-9ee0659a0608", 00:21:03.701 "assigned_rate_limits": { 00:21:03.701 "rw_ios_per_sec": 0, 00:21:03.701 "rw_mbytes_per_sec": 0, 00:21:03.701 "r_mbytes_per_sec": 0, 00:21:03.701 "w_mbytes_per_sec": 0 00:21:03.701 }, 00:21:03.701 "claimed": false, 00:21:03.701 "zoned": false, 00:21:03.701 "supported_io_types": { 00:21:03.701 "read": true, 00:21:03.701 "write": true, 00:21:03.701 "unmap": true, 00:21:03.701 "flush": false, 00:21:03.701 "reset": true, 00:21:03.701 "nvme_admin": false, 00:21:03.701 "nvme_io": false, 00:21:03.701 "nvme_io_md": false, 00:21:03.701 "write_zeroes": true, 00:21:03.701 "zcopy": false, 00:21:03.701 "get_zone_info": false, 00:21:03.701 "zone_management": false, 00:21:03.701 "zone_append": false, 00:21:03.701 "compare": false, 00:21:03.701 "compare_and_write": false, 00:21:03.701 "abort": false, 00:21:03.701 "seek_hole": true, 00:21:03.701 "seek_data": true, 00:21:03.701 "copy": false, 00:21:03.701 "nvme_iov_md": false 00:21:03.701 }, 00:21:03.701 "driver_specific": { 00:21:03.701 "lvol": { 00:21:03.701 "lvol_store_uuid": "054173d7-433f-4a23-b4a9-29958cab9565", 00:21:03.701 "base_bdev": "nvme0n1", 00:21:03.701 "thin_provision": true, 00:21:03.701 "num_allocated_clusters": 0, 00:21:03.701 "snapshot": false, 00:21:03.701 "clone": false, 00:21:03.701 "esnap_clone": false 00:21:03.701 } 00:21:03.701 } 00:21:03.701 } 00:21:03.701 ]' 00:21:03.701 19:22:47 ftl.ftl_restore -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:21:03.701 19:22:47 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:21:03.701 19:22:47 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:03.701 19:22:47 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:21:03.701 19:22:47 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:21:03.701 19:22:47 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:21:03.701 19:22:47 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:21:03.701 19:22:47 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:21:03.701 19:22:47 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:21:03.972 19:22:48 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:21:03.972 19:22:48 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:21:03.972 19:22:48 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size 6d08f68a-81c8-44eb-b7a4-9ee0659a0608 00:21:03.972 19:22:48 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=6d08f68a-81c8-44eb-b7a4-9ee0659a0608 00:21:03.972 19:22:48 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:03.972 19:22:48 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:21:03.972 19:22:48 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:21:03.972 19:22:48 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6d08f68a-81c8-44eb-b7a4-9ee0659a0608 00:21:04.233 19:22:48 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:04.233 { 00:21:04.233 "name": "6d08f68a-81c8-44eb-b7a4-9ee0659a0608", 00:21:04.233 "aliases": [ 00:21:04.233 "lvs/nvme0n1p0" 00:21:04.233 ], 00:21:04.233 "product_name": "Logical Volume", 00:21:04.233 "block_size": 4096, 00:21:04.233 "num_blocks": 26476544, 00:21:04.233 "uuid": "6d08f68a-81c8-44eb-b7a4-9ee0659a0608", 00:21:04.233 "assigned_rate_limits": { 00:21:04.233 "rw_ios_per_sec": 0, 00:21:04.233 "rw_mbytes_per_sec": 0, 00:21:04.233 "r_mbytes_per_sec": 0, 00:21:04.233 "w_mbytes_per_sec": 0 00:21:04.233 }, 00:21:04.233 "claimed": false, 00:21:04.233 "zoned": false, 00:21:04.233 "supported_io_types": { 00:21:04.233 "read": true, 00:21:04.233 "write": true, 00:21:04.233 "unmap": true, 00:21:04.233 "flush": false, 00:21:04.233 "reset": true, 00:21:04.233 "nvme_admin": false, 00:21:04.233 "nvme_io": false, 00:21:04.233 "nvme_io_md": false, 00:21:04.233 "write_zeroes": true, 00:21:04.233 "zcopy": false, 00:21:04.233 "get_zone_info": false, 00:21:04.233 "zone_management": false, 00:21:04.233 "zone_append": false, 00:21:04.233 "compare": false, 00:21:04.233 "compare_and_write": false, 00:21:04.233 "abort": false, 00:21:04.233 "seek_hole": true, 00:21:04.233 "seek_data": true, 00:21:04.233 "copy": false, 00:21:04.233 "nvme_iov_md": false 00:21:04.233 }, 00:21:04.233 "driver_specific": { 00:21:04.233 "lvol": { 00:21:04.233 "lvol_store_uuid": "054173d7-433f-4a23-b4a9-29958cab9565", 00:21:04.233 "base_bdev": "nvme0n1", 00:21:04.233 "thin_provision": true, 00:21:04.233 "num_allocated_clusters": 0, 00:21:04.233 "snapshot": false, 00:21:04.233 "clone": false, 00:21:04.233 "esnap_clone": false 00:21:04.233 } 00:21:04.233 } 00:21:04.233 } 00:21:04.233 ]' 00:21:04.233 19:22:48 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 
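The repeated jq/bs/nb pattern above is get_bdev_size at work: it dumps the bdev's JSON, pulls out block_size and num_blocks, and reports capacity in MiB as block_size * num_blocks / 2^20. Both sizes in this run check out: 4096 B x 1310720 blocks = 5120 MiB for nvme0n1, and 4096 B x 26476544 blocks = 103424 MiB for the thin-provisioned lvol. A self-contained sketch of the same calculation (rpc_py stands in for scripts/rpc.py as in the log; the helper name is illustrative):

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

get_bdev_size_mib() {
    # Query one bdev and convert block_size * num_blocks into MiB.
    local bdev=$1 info bs nb
    info=$("$rpc_py" bdev_get_bdevs -b "$bdev")
    bs=$(jq '.[] .block_size' <<< "$info")
    nb=$(jq '.[] .num_blocks' <<< "$info")
    echo $((bs * nb / 1024 / 1024))
}

get_bdev_size_mib nvme0n1   # 4096 * 1310720 / 2^20 = 5120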
00:21:04.233 19:22:48 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:21:04.233 19:22:48 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:04.233 19:22:48 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:21:04.233 19:22:48 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:21:04.233 19:22:48 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:21:04.233 19:22:48 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:21:04.233 19:22:48 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:21:04.495 19:22:48 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:21:04.495 19:22:48 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size 6d08f68a-81c8-44eb-b7a4-9ee0659a0608 00:21:04.495 19:22:48 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=6d08f68a-81c8-44eb-b7a4-9ee0659a0608 00:21:04.495 19:22:48 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:04.495 19:22:48 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:21:04.495 19:22:48 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:21:04.495 19:22:48 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6d08f68a-81c8-44eb-b7a4-9ee0659a0608 00:21:04.495 19:22:48 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:04.495 { 00:21:04.495 "name": "6d08f68a-81c8-44eb-b7a4-9ee0659a0608", 00:21:04.495 "aliases": [ 00:21:04.495 "lvs/nvme0n1p0" 00:21:04.495 ], 00:21:04.495 "product_name": "Logical Volume", 00:21:04.495 "block_size": 4096, 00:21:04.495 "num_blocks": 26476544, 00:21:04.495 "uuid": "6d08f68a-81c8-44eb-b7a4-9ee0659a0608", 00:21:04.495 "assigned_rate_limits": { 00:21:04.495 "rw_ios_per_sec": 0, 00:21:04.495 "rw_mbytes_per_sec": 0, 00:21:04.495 "r_mbytes_per_sec": 0, 00:21:04.495 "w_mbytes_per_sec": 0 00:21:04.495 }, 00:21:04.495 "claimed": false, 00:21:04.495 "zoned": false, 00:21:04.495 "supported_io_types": { 00:21:04.495 "read": true, 00:21:04.495 "write": true, 00:21:04.495 "unmap": true, 00:21:04.495 "flush": false, 00:21:04.495 "reset": true, 00:21:04.495 "nvme_admin": false, 00:21:04.495 "nvme_io": false, 00:21:04.495 "nvme_io_md": false, 00:21:04.495 "write_zeroes": true, 00:21:04.495 "zcopy": false, 00:21:04.495 "get_zone_info": false, 00:21:04.495 "zone_management": false, 00:21:04.495 "zone_append": false, 00:21:04.495 "compare": false, 00:21:04.495 "compare_and_write": false, 00:21:04.495 "abort": false, 00:21:04.495 "seek_hole": true, 00:21:04.495 "seek_data": true, 00:21:04.495 "copy": false, 00:21:04.495 "nvme_iov_md": false 00:21:04.495 }, 00:21:04.495 "driver_specific": { 00:21:04.495 "lvol": { 00:21:04.495 "lvol_store_uuid": "054173d7-433f-4a23-b4a9-29958cab9565", 00:21:04.495 "base_bdev": "nvme0n1", 00:21:04.495 "thin_provision": true, 00:21:04.495 "num_allocated_clusters": 0, 00:21:04.495 "snapshot": false, 00:21:04.495 "clone": false, 00:21:04.495 "esnap_clone": false 00:21:04.495 } 00:21:04.495 } 00:21:04.495 } 00:21:04.495 ]' 00:21:04.495 19:22:48 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:21:04.758 19:22:48 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:21:04.758 19:22:48 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:04.758 19:22:48 ftl.ftl_restore -- 
common/autotest_common.sh@1388 -- # nb=26476544 00:21:04.758 19:22:48 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:21:04.758 19:22:48 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:21:04.758 19:22:48 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:21:04.758 19:22:48 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 6d08f68a-81c8-44eb-b7a4-9ee0659a0608 --l2p_dram_limit 10' 00:21:04.758 19:22:48 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:21:04.758 19:22:48 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:21:04.758 19:22:48 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:21:04.758 19:22:48 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:21:04.758 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:21:04.758 19:22:48 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 6d08f68a-81c8-44eb-b7a4-9ee0659a0608 --l2p_dram_limit 10 -c nvc0n1p0 00:21:04.758 [2024-12-16 19:22:49.074818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.758 [2024-12-16 19:22:49.074859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:04.758 [2024-12-16 19:22:49.074871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:04.758 [2024-12-16 19:22:49.074878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.758 [2024-12-16 19:22:49.074922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.758 [2024-12-16 19:22:49.074930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:04.758 [2024-12-16 19:22:49.074937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:21:04.758 [2024-12-16 19:22:49.074943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.758 [2024-12-16 19:22:49.074962] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:04.758 [2024-12-16 19:22:49.075532] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:04.758 [2024-12-16 19:22:49.075556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.758 [2024-12-16 19:22:49.075562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:04.758 [2024-12-16 19:22:49.075571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.598 ms 00:21:04.758 [2024-12-16 19:22:49.075577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.758 [2024-12-16 19:22:49.075647] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 2ff538f2-e73b-43af-beae-0346febb4f7c 00:21:04.758 [2024-12-16 19:22:49.076599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.758 [2024-12-16 19:22:49.076629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:21:04.758 [2024-12-16 19:22:49.076638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:21:04.758 [2024-12-16 19:22:49.076645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.758 [2024-12-16 19:22:49.081393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.758 [2024-12-16 
19:22:49.081424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:04.758 [2024-12-16 19:22:49.081431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.714 ms 00:21:04.758 [2024-12-16 19:22:49.081438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.758 [2024-12-16 19:22:49.081504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.758 [2024-12-16 19:22:49.081514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:04.758 [2024-12-16 19:22:49.081520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:21:04.758 [2024-12-16 19:22:49.081530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.758 [2024-12-16 19:22:49.081570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.758 [2024-12-16 19:22:49.081579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:04.758 [2024-12-16 19:22:49.081585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:21:04.758 [2024-12-16 19:22:49.081595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.758 [2024-12-16 19:22:49.081611] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:04.758 [2024-12-16 19:22:49.084581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.758 [2024-12-16 19:22:49.084607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:04.758 [2024-12-16 19:22:49.084617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.972 ms 00:21:04.758 [2024-12-16 19:22:49.084623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.758 [2024-12-16 19:22:49.084651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.758 [2024-12-16 19:22:49.084658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:04.758 [2024-12-16 19:22:49.084665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:21:04.758 [2024-12-16 19:22:49.084671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.758 [2024-12-16 19:22:49.084690] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:21:04.758 [2024-12-16 19:22:49.084798] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:04.758 [2024-12-16 19:22:49.084811] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:04.758 [2024-12-16 19:22:49.084819] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:04.758 [2024-12-16 19:22:49.084829] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:04.758 [2024-12-16 19:22:49.084835] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:04.758 [2024-12-16 19:22:49.084843] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:04.758 [2024-12-16 19:22:49.084849] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:04.758 [2024-12-16 19:22:49.084859] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:04.758 [2024-12-16 19:22:49.084865] 
ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:04.758 [2024-12-16 19:22:49.084872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.758 [2024-12-16 19:22:49.084882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:04.758 [2024-12-16 19:22:49.084889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.183 ms 00:21:04.758 [2024-12-16 19:22:49.084894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.758 [2024-12-16 19:22:49.084960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.758 [2024-12-16 19:22:49.084967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:04.758 [2024-12-16 19:22:49.084973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:21:04.758 [2024-12-16 19:22:49.084979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.758 [2024-12-16 19:22:49.085055] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:04.758 [2024-12-16 19:22:49.085062] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:04.758 [2024-12-16 19:22:49.085070] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:04.758 [2024-12-16 19:22:49.085075] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:04.758 [2024-12-16 19:22:49.085082] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:04.758 [2024-12-16 19:22:49.085087] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:04.758 [2024-12-16 19:22:49.085093] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:04.758 [2024-12-16 19:22:49.085098] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:04.758 [2024-12-16 19:22:49.085105] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:04.758 [2024-12-16 19:22:49.085110] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:04.758 [2024-12-16 19:22:49.085116] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:04.758 [2024-12-16 19:22:49.085121] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:04.758 [2024-12-16 19:22:49.085128] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:04.758 [2024-12-16 19:22:49.085133] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:04.758 [2024-12-16 19:22:49.085139] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:04.758 [2024-12-16 19:22:49.085145] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:04.758 [2024-12-16 19:22:49.085153] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:04.759 [2024-12-16 19:22:49.085158] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:04.759 [2024-12-16 19:22:49.085164] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:04.759 [2024-12-16 19:22:49.085178] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:04.759 [2024-12-16 19:22:49.085185] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:04.759 [2024-12-16 19:22:49.085190] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:04.759 [2024-12-16 19:22:49.085196] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:04.759 
[2024-12-16 19:22:49.085201] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:04.759 [2024-12-16 19:22:49.085207] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:04.759 [2024-12-16 19:22:49.085212] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:04.759 [2024-12-16 19:22:49.085218] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:04.759 [2024-12-16 19:22:49.085223] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:04.759 [2024-12-16 19:22:49.085229] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:04.759 [2024-12-16 19:22:49.085235] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:04.759 [2024-12-16 19:22:49.085241] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:04.759 [2024-12-16 19:22:49.085246] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:04.759 [2024-12-16 19:22:49.085254] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:04.759 [2024-12-16 19:22:49.085259] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:04.759 [2024-12-16 19:22:49.085265] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:04.759 [2024-12-16 19:22:49.085270] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:04.759 [2024-12-16 19:22:49.085276] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:04.759 [2024-12-16 19:22:49.085280] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:04.759 [2024-12-16 19:22:49.085287] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:04.759 [2024-12-16 19:22:49.085292] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:04.759 [2024-12-16 19:22:49.085299] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:04.759 [2024-12-16 19:22:49.085304] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:04.759 [2024-12-16 19:22:49.085310] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:04.759 [2024-12-16 19:22:49.085315] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:04.759 [2024-12-16 19:22:49.085322] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:04.759 [2024-12-16 19:22:49.085327] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:04.759 [2024-12-16 19:22:49.085334] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:04.759 [2024-12-16 19:22:49.085340] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:04.759 [2024-12-16 19:22:49.085348] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:04.759 [2024-12-16 19:22:49.085353] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:04.759 [2024-12-16 19:22:49.085360] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:04.759 [2024-12-16 19:22:49.085365] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:04.759 [2024-12-16 19:22:49.085371] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:04.759 [2024-12-16 19:22:49.085377] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:04.759 [2024-12-16 
19:22:49.085385] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:04.759 [2024-12-16 19:22:49.085393] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:04.759 [2024-12-16 19:22:49.085399] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:04.759 [2024-12-16 19:22:49.085405] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:04.759 [2024-12-16 19:22:49.085411] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:04.759 [2024-12-16 19:22:49.085416] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:04.759 [2024-12-16 19:22:49.085423] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:04.759 [2024-12-16 19:22:49.085429] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:04.759 [2024-12-16 19:22:49.085436] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:04.759 [2024-12-16 19:22:49.085441] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:04.759 [2024-12-16 19:22:49.085449] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:04.759 [2024-12-16 19:22:49.085454] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:04.759 [2024-12-16 19:22:49.085461] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:04.759 [2024-12-16 19:22:49.085466] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:04.759 [2024-12-16 19:22:49.085473] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:04.759 [2024-12-16 19:22:49.085478] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:04.759 [2024-12-16 19:22:49.085485] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:04.759 [2024-12-16 19:22:49.085491] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:04.759 [2024-12-16 19:22:49.085498] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:04.759 [2024-12-16 19:22:49.085503] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:04.759 [2024-12-16 19:22:49.085510] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:04.759 [2024-12-16 19:22:49.085515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.759 [2024-12-16 19:22:49.085522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:04.759 [2024-12-16 19:22:49.085527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.514 ms 00:21:04.759 [2024-12-16 19:22:49.085534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.759 [2024-12-16 19:22:49.085574] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:21:04.759 [2024-12-16 19:22:49.085585] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:21:08.972 [2024-12-16 19:22:52.967887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.972 [2024-12-16 19:22:52.967983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:21:08.972 [2024-12-16 19:22:52.968002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3882.295 ms 00:21:08.972 [2024-12-16 19:22:52.968013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.972 [2024-12-16 19:22:52.999905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.972 [2024-12-16 19:22:52.999980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:08.972 [2024-12-16 19:22:52.999994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.628 ms 00:21:08.972 [2024-12-16 19:22:53.000005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.972 [2024-12-16 19:22:53.000150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.972 [2024-12-16 19:22:53.000165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:08.972 [2024-12-16 19:22:53.000193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:21:08.972 [2024-12-16 19:22:53.000209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.972 [2024-12-16 19:22:53.035815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.972 [2024-12-16 19:22:53.035873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:08.972 [2024-12-16 19:22:53.035885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.553 ms 00:21:08.972 [2024-12-16 19:22:53.035895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.972 [2024-12-16 19:22:53.035931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.972 [2024-12-16 19:22:53.035946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:08.972 [2024-12-16 19:22:53.035956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:08.972 [2024-12-16 19:22:53.035974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.972 [2024-12-16 19:22:53.036593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.972 [2024-12-16 19:22:53.036638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:08.972 [2024-12-16 19:22:53.036649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.565 ms 00:21:08.972 [2024-12-16 19:22:53.036659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.972 
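Two things in the startup trace above deserve a note. First, the layout is self-consistent with the bdev_ftl_create arguments: 20971520 L2P entries at 4 bytes each is exactly the 80.00 MiB l2p region in the dump, and --l2p_dram_limit 10 is why the lines that follow report "l2p maximum resident size is: 9 (of 10) MiB". Second, the earlier diagnostic "restore.sh: line 54: [: : integer expression expected" is a genuine shell wart, not an FTL failure: the script ran '[' '' -eq 1 ']' with an unset option flag, and test cannot compare an empty string numerically. The run continues because only the false branch mattered, but the usual hardening is sketched below (flag is an illustrative name):

flag=''

# What the trace did: a numeric test on an empty string errors out
# with '[: : integer expression expected' and returns a non-zero status.
[ "$flag" -eq 1 ] 2>/dev/null && echo "fast path"

# Hardened form 1: give the expansion a numeric default.
[ "${flag:-0}" -eq 1 ] && echo "fast path"

# Hardened form 2: gate the numeric test on non-emptiness.
[ -n "$flag" ] && [ "$flag" -eq 1 ] && echo "fast path"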
[2024-12-16 19:22:53.036776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.972 [2024-12-16 19:22:53.036797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:08.972 [2024-12-16 19:22:53.036810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:21:08.972 [2024-12-16 19:22:53.036823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.972 [2024-12-16 19:22:53.054246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.972 [2024-12-16 19:22:53.054296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:08.972 [2024-12-16 19:22:53.054307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.406 ms 00:21:08.972 [2024-12-16 19:22:53.054317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.972 [2024-12-16 19:22:53.076384] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:08.972 [2024-12-16 19:22:53.080558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.972 [2024-12-16 19:22:53.080607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:08.972 [2024-12-16 19:22:53.080626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.140 ms 00:21:08.972 [2024-12-16 19:22:53.080636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.972 [2024-12-16 19:22:53.182729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.972 [2024-12-16 19:22:53.182799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:21:08.972 [2024-12-16 19:22:53.182821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 102.035 ms 00:21:08.972 [2024-12-16 19:22:53.182830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.972 [2024-12-16 19:22:53.183042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.972 [2024-12-16 19:22:53.183059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:08.972 [2024-12-16 19:22:53.183074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.156 ms 00:21:08.972 [2024-12-16 19:22:53.183082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.972 [2024-12-16 19:22:53.209229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.972 [2024-12-16 19:22:53.209280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:21:08.972 [2024-12-16 19:22:53.209297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.086 ms 00:21:08.972 [2024-12-16 19:22:53.209306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.972 [2024-12-16 19:22:53.234351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.972 [2024-12-16 19:22:53.234413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:21:08.972 [2024-12-16 19:22:53.234430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.986 ms 00:21:08.972 [2024-12-16 19:22:53.234438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.972 [2024-12-16 19:22:53.235050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.972 [2024-12-16 19:22:53.235069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:08.972 
[2024-12-16 19:22:53.235081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.563 ms 00:21:08.972 [2024-12-16 19:22:53.235091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.972 [2024-12-16 19:22:53.317647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.972 [2024-12-16 19:22:53.317705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:21:08.973 [2024-12-16 19:22:53.317724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 82.508 ms 00:21:08.973 [2024-12-16 19:22:53.317733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.235 [2024-12-16 19:22:53.345442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.235 [2024-12-16 19:22:53.345496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:21:09.235 [2024-12-16 19:22:53.345513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.610 ms 00:21:09.235 [2024-12-16 19:22:53.345521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.235 [2024-12-16 19:22:53.371114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.235 [2024-12-16 19:22:53.371165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:21:09.235 [2024-12-16 19:22:53.371191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.538 ms 00:21:09.235 [2024-12-16 19:22:53.371200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.235 [2024-12-16 19:22:53.397410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.235 [2024-12-16 19:22:53.397466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:09.235 [2024-12-16 19:22:53.397482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.157 ms 00:21:09.235 [2024-12-16 19:22:53.397490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.235 [2024-12-16 19:22:53.397545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.235 [2024-12-16 19:22:53.397555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:09.235 [2024-12-16 19:22:53.397570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:21:09.235 [2024-12-16 19:22:53.397578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.235 [2024-12-16 19:22:53.397677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.235 [2024-12-16 19:22:53.397691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:09.235 [2024-12-16 19:22:53.397702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:21:09.235 [2024-12-16 19:22:53.397710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.235 [2024-12-16 19:22:53.399268] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4323.922 ms, result 0 00:21:09.235 { 00:21:09.235 "name": "ftl0", 00:21:09.235 "uuid": "2ff538f2-e73b-43af-beae-0346febb4f7c" 00:21:09.235 } 00:21:09.235 19:22:53 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:21:09.235 19:22:53 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:21:09.497 19:22:53 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:21:09.497 19:22:53 ftl.ftl_restore -- 
ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:21:09.497 [2024-12-16 19:22:53.830218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.497 [2024-12-16 19:22:53.830286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:09.497 [2024-12-16 19:22:53.830301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:09.497 [2024-12-16 19:22:53.830312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.497 [2024-12-16 19:22:53.830337] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:09.497 [2024-12-16 19:22:53.833373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.497 [2024-12-16 19:22:53.833412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:09.497 [2024-12-16 19:22:53.833426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.013 ms 00:21:09.497 [2024-12-16 19:22:53.833434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.497 [2024-12-16 19:22:53.833710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.497 [2024-12-16 19:22:53.833724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:09.497 [2024-12-16 19:22:53.833736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.240 ms 00:21:09.497 [2024-12-16 19:22:53.833744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.497 [2024-12-16 19:22:53.836995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.497 [2024-12-16 19:22:53.837022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:09.497 [2024-12-16 19:22:53.837034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.233 ms 00:21:09.497 [2024-12-16 19:22:53.837043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.497 [2024-12-16 19:22:53.843199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.497 [2024-12-16 19:22:53.843243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:09.497 [2024-12-16 19:22:53.843260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.134 ms 00:21:09.497 [2024-12-16 19:22:53.843269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.760 [2024-12-16 19:22:53.870243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.760 [2024-12-16 19:22:53.870292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:09.760 [2024-12-16 19:22:53.870308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.886 ms 00:21:09.760 [2024-12-16 19:22:53.870316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.760 [2024-12-16 19:22:53.887396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.760 [2024-12-16 19:22:53.887446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:09.760 [2024-12-16 19:22:53.887461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.023 ms 00:21:09.760 [2024-12-16 19:22:53.887470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.760 [2024-12-16 19:22:53.887642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.760 [2024-12-16 19:22:53.887655] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:09.760 [2024-12-16 19:22:53.887667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.119 ms 00:21:09.760 [2024-12-16 19:22:53.887675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.760 [2024-12-16 19:22:53.913444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.760 [2024-12-16 19:22:53.913492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:09.760 [2024-12-16 19:22:53.913507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.742 ms 00:21:09.760 [2024-12-16 19:22:53.913514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.760 [2024-12-16 19:22:53.938829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.760 [2024-12-16 19:22:53.938873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:09.760 [2024-12-16 19:22:53.938887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.260 ms 00:21:09.760 [2024-12-16 19:22:53.938895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.760 [2024-12-16 19:22:53.963359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.760 [2024-12-16 19:22:53.963407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:09.760 [2024-12-16 19:22:53.963421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.410 ms 00:21:09.760 [2024-12-16 19:22:53.963429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.760 [2024-12-16 19:22:53.987988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.760 [2024-12-16 19:22:53.988034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:09.760 [2024-12-16 19:22:53.988048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.462 ms 00:21:09.760 [2024-12-16 19:22:53.988055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.760 [2024-12-16 19:22:53.988103] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:09.760 [2024-12-16 19:22:53.988118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:09.760 [2024-12-16 19:22:53.988135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:09.760 [2024-12-16 19:22:53.988144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:09.760 [2024-12-16 19:22:53.988155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:09.760 [2024-12-16 19:22:53.988162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:09.760 [2024-12-16 19:22:53.988183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:09.760 [2024-12-16 19:22:53.988191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:09.760 [2024-12-16 19:22:53.988205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:09.760 [2024-12-16 19:22:53.988213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:09.760 [2024-12-16 19:22:53.988223] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:09.760 [2024-12-16 19:22:53.988231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:09.760 [2024-12-16 19:22:53.988241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:09.760 [2024-12-16 19:22:53.988249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:09.760 [2024-12-16 19:22:53.988259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:09.760 [2024-12-16 19:22:53.988266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:09.760 [2024-12-16 19:22:53.988276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:09.760 [2024-12-16 19:22:53.988284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:09.760 [2024-12-16 19:22:53.988293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:09.760 [2024-12-16 19:22:53.988300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:09.760 [2024-12-16 19:22:53.988312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:09.760 [2024-12-16 19:22:53.988320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:09.760 [2024-12-16 19:22:53.988329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:09.760 [2024-12-16 19:22:53.988336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:09.760 [2024-12-16 19:22:53.988349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:09.760 [2024-12-16 19:22:53.988356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:09.760 [2024-12-16 19:22:53.988366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:09.760 [2024-12-16 19:22:53.988373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:09.760 [2024-12-16 19:22:53.988382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:09.760 [2024-12-16 19:22:53.988390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:09.760 [2024-12-16 19:22:53.988401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:09.760 [2024-12-16 19:22:53.988408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:09.760 [2024-12-16 19:22:53.988418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:09.760 [2024-12-16 19:22:53.988427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:09.760 [2024-12-16 19:22:53.988436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:09.760 
[2024-12-16 19:22:53.988443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:09.760 [2024-12-16 19:22:53.988453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:09.760 [2024-12-16 19:22:53.988460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:09.760 [2024-12-16 19:22:53.988469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:09.760 [2024-12-16 19:22:53.988476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:09.760 [2024-12-16 19:22:53.988489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:09.760 [2024-12-16 19:22:53.988496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:09.760 [2024-12-16 19:22:53.988507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:09.760 [2024-12-16 19:22:53.988515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:09.760 [2024-12-16 19:22:53.988524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:09.760 [2024-12-16 19:22:53.988532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:09.760 [2024-12-16 19:22:53.988542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:09.760 [2024-12-16 19:22:53.988550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:09.760 [2024-12-16 19:22:53.988560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:09.760 [2024-12-16 19:22:53.988568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:09.760 [2024-12-16 19:22:53.988577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:09.760 [2024-12-16 19:22:53.988584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:09.761 [2024-12-16 19:22:53.988593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:09.761 [2024-12-16 19:22:53.988600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:09.761 [2024-12-16 19:22:53.988610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:09.761 [2024-12-16 19:22:53.988617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:09.761 [2024-12-16 19:22:53.988634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:09.761 [2024-12-16 19:22:53.988642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:09.761 [2024-12-16 19:22:53.988651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:09.761 [2024-12-16 19:22:53.988658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 
state: free 00:21:09.761 [2024-12-16 19:22:53.988667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:09.761 [2024-12-16 19:22:53.988676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:09.761 [2024-12-16 19:22:53.988686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:09.761 [2024-12-16 19:22:53.988694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:09.761 [2024-12-16 19:22:53.988704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:09.761 [2024-12-16 19:22:53.988711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:09.761 [2024-12-16 19:22:53.988721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:09.761 [2024-12-16 19:22:53.988728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:09.761 [2024-12-16 19:22:53.988739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:09.761 [2024-12-16 19:22:53.988746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:09.761 [2024-12-16 19:22:53.988757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:09.761 [2024-12-16 19:22:53.988765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:09.761 [2024-12-16 19:22:53.988778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:09.761 [2024-12-16 19:22:53.988786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:09.761 [2024-12-16 19:22:53.988795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:09.761 [2024-12-16 19:22:53.988803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:09.761 [2024-12-16 19:22:53.988813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:09.761 [2024-12-16 19:22:53.988820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:09.761 [2024-12-16 19:22:53.988830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:09.761 [2024-12-16 19:22:53.988837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:09.761 [2024-12-16 19:22:53.988847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:09.761 [2024-12-16 19:22:53.988855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:09.761 [2024-12-16 19:22:53.988866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:09.761 [2024-12-16 19:22:53.988875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:09.761 [2024-12-16 19:22:53.988883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 
0 / 261120 wr_cnt: 0 state: free 00:21:09.761 [2024-12-16 19:22:53.988891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:09.761 [2024-12-16 19:22:53.989053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:09.761 [2024-12-16 19:22:53.989069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:09.761 [2024-12-16 19:22:53.989080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:09.761 [2024-12-16 19:22:53.989088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:09.761 [2024-12-16 19:22:53.989098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:09.761 [2024-12-16 19:22:53.989105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:09.761 [2024-12-16 19:22:53.989114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:09.761 [2024-12-16 19:22:53.989122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:09.761 [2024-12-16 19:22:53.989132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:09.761 [2024-12-16 19:22:53.989140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:09.761 [2024-12-16 19:22:53.989149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:09.761 [2024-12-16 19:22:53.989157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:09.761 [2024-12-16 19:22:53.989168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:09.761 [2024-12-16 19:22:53.989186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:09.761 [2024-12-16 19:22:53.989196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:09.761 [2024-12-16 19:22:53.989211] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:09.761 [2024-12-16 19:22:53.989221] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 2ff538f2-e73b-43af-beae-0346febb4f7c 00:21:09.761 [2024-12-16 19:22:53.989229] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:09.761 [2024-12-16 19:22:53.989241] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:09.761 [2024-12-16 19:22:53.989252] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:09.761 [2024-12-16 19:22:53.989262] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:09.761 [2024-12-16 19:22:53.989269] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:09.761 [2024-12-16 19:22:53.989279] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:09.761 [2024-12-16 19:22:53.989287] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:09.761 [2024-12-16 19:22:53.989295] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:09.761 [2024-12-16 19:22:53.989302] ftl_debug.c: 220:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] start: 0 00:21:09.761 [2024-12-16 19:22:53.989310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.761 [2024-12-16 19:22:53.989318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:09.761 [2024-12-16 19:22:53.989330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.209 ms 00:21:09.761 [2024-12-16 19:22:53.989339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.761 [2024-12-16 19:22:54.002738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.761 [2024-12-16 19:22:54.002781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:09.761 [2024-12-16 19:22:54.002795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.353 ms 00:21:09.761 [2024-12-16 19:22:54.002803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.761 [2024-12-16 19:22:54.003229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.761 [2024-12-16 19:22:54.003248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:09.761 [2024-12-16 19:22:54.003262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.381 ms 00:21:09.761 [2024-12-16 19:22:54.003270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.761 [2024-12-16 19:22:54.049504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:09.761 [2024-12-16 19:22:54.049554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:09.761 [2024-12-16 19:22:54.049568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:09.761 [2024-12-16 19:22:54.049577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.761 [2024-12-16 19:22:54.049646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:09.761 [2024-12-16 19:22:54.049655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:09.761 [2024-12-16 19:22:54.049667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:09.761 [2024-12-16 19:22:54.049676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.761 [2024-12-16 19:22:54.049777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:09.761 [2024-12-16 19:22:54.049788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:09.761 [2024-12-16 19:22:54.049798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:09.761 [2024-12-16 19:22:54.049806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.761 [2024-12-16 19:22:54.049829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:09.761 [2024-12-16 19:22:54.049837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:09.761 [2024-12-16 19:22:54.049847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:09.761 [2024-12-16 19:22:54.049856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.023 [2024-12-16 19:22:54.132843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:10.023 [2024-12-16 19:22:54.132898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:10.023 [2024-12-16 19:22:54.132912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:21:10.023 [2024-12-16 19:22:54.132921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.023 [2024-12-16 19:22:54.201090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:10.023 [2024-12-16 19:22:54.201148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:10.023 [2024-12-16 19:22:54.201163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:10.023 [2024-12-16 19:22:54.201186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.023 [2024-12-16 19:22:54.201272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:10.024 [2024-12-16 19:22:54.201283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:10.024 [2024-12-16 19:22:54.201294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:10.024 [2024-12-16 19:22:54.201303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.024 [2024-12-16 19:22:54.201376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:10.024 [2024-12-16 19:22:54.201386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:10.024 [2024-12-16 19:22:54.201397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:10.024 [2024-12-16 19:22:54.201405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.024 [2024-12-16 19:22:54.201508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:10.024 [2024-12-16 19:22:54.201518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:10.024 [2024-12-16 19:22:54.201529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:10.024 [2024-12-16 19:22:54.201537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.024 [2024-12-16 19:22:54.201579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:10.024 [2024-12-16 19:22:54.201588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:10.024 [2024-12-16 19:22:54.201598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:10.024 [2024-12-16 19:22:54.201607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.024 [2024-12-16 19:22:54.201653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:10.024 [2024-12-16 19:22:54.201662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:10.024 [2024-12-16 19:22:54.201673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:10.024 [2024-12-16 19:22:54.201681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.024 [2024-12-16 19:22:54.201734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:10.024 [2024-12-16 19:22:54.201744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:10.024 [2024-12-16 19:22:54.201755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:10.024 [2024-12-16 19:22:54.201762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.024 [2024-12-16 19:22:54.201913] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 371.651 ms, result 0 00:21:10.024 true 00:21:10.024 19:22:54 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 79042 
00:21:10.024 19:22:54 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 79042 ']' 00:21:10.024 19:22:54 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 79042 00:21:10.024 19:22:54 ftl.ftl_restore -- common/autotest_common.sh@959 -- # uname 00:21:10.024 19:22:54 ftl.ftl_restore -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:10.024 19:22:54 ftl.ftl_restore -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79042 00:21:10.024 killing process with pid 79042 00:21:10.024 19:22:54 ftl.ftl_restore -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:10.024 19:22:54 ftl.ftl_restore -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:10.024 19:22:54 ftl.ftl_restore -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79042' 00:21:10.024 19:22:54 ftl.ftl_restore -- common/autotest_common.sh@973 -- # kill 79042 00:21:10.024 19:22:54 ftl.ftl_restore -- common/autotest_common.sh@978 -- # wait 79042 00:21:16.610 19:22:59 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:21:19.914 262144+0 records in 00:21:19.914 262144+0 records out 00:21:19.914 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.11398 s, 261 MB/s 00:21:19.914 19:23:04 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:21:21.300 19:23:05 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:21.561 [2024-12-16 19:23:05.686627] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:21:21.561 [2024-12-16 19:23:05.686826] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79278 ] 00:21:21.562 [2024-12-16 19:23:05.841256] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:21.822 [2024-12-16 19:23:05.960597] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:21:22.083 [2024-12-16 19:23:06.257223] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:22.083 [2024-12-16 19:23:06.257307] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:22.083 [2024-12-16 19:23:06.419945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.083 [2024-12-16 19:23:06.420279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:22.083 [2024-12-16 19:23:06.420307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:22.083 [2024-12-16 19:23:06.420317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.083 [2024-12-16 19:23:06.420394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.083 [2024-12-16 19:23:06.420407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:22.083 [2024-12-16 19:23:06.420417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:21:22.083 [2024-12-16 19:23:06.420425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.083 [2024-12-16 19:23:06.420449] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] 
Using nvc0n1p0 as write buffer cache 00:21:22.083 [2024-12-16 19:23:06.421220] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:22.083 [2024-12-16 19:23:06.421240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.083 [2024-12-16 19:23:06.421249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:22.083 [2024-12-16 19:23:06.421259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.797 ms 00:21:22.083 [2024-12-16 19:23:06.421268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.083 [2024-12-16 19:23:06.423007] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:22.346 [2024-12-16 19:23:06.437687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.346 [2024-12-16 19:23:06.437742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:22.346 [2024-12-16 19:23:06.437757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.682 ms 00:21:22.346 [2024-12-16 19:23:06.437765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.346 [2024-12-16 19:23:06.437857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.346 [2024-12-16 19:23:06.437868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:22.346 [2024-12-16 19:23:06.437877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:21:22.346 [2024-12-16 19:23:06.437885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.346 [2024-12-16 19:23:06.446388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.346 [2024-12-16 19:23:06.446443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:22.346 [2024-12-16 19:23:06.446454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.421 ms 00:21:22.346 [2024-12-16 19:23:06.446468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.346 [2024-12-16 19:23:06.446551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.346 [2024-12-16 19:23:06.446560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:22.346 [2024-12-16 19:23:06.446569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:21:22.346 [2024-12-16 19:23:06.446578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.346 [2024-12-16 19:23:06.446626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.346 [2024-12-16 19:23:06.446635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:22.346 [2024-12-16 19:23:06.446643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:21:22.346 [2024-12-16 19:23:06.446651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.346 [2024-12-16 19:23:06.446678] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:22.346 [2024-12-16 19:23:06.450682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.346 [2024-12-16 19:23:06.450723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:22.346 [2024-12-16 19:23:06.450738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.008 ms 00:21:22.346 [2024-12-16 19:23:06.450746] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.346 [2024-12-16 19:23:06.450788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.346 [2024-12-16 19:23:06.450798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:22.346 [2024-12-16 19:23:06.450807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:21:22.346 [2024-12-16 19:23:06.450815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.346 [2024-12-16 19:23:06.450872] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:22.346 [2024-12-16 19:23:06.450897] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:22.346 [2024-12-16 19:23:06.450934] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:22.346 [2024-12-16 19:23:06.450954] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:21:22.346 [2024-12-16 19:23:06.451059] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:22.346 [2024-12-16 19:23:06.451071] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:22.346 [2024-12-16 19:23:06.451082] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:22.346 [2024-12-16 19:23:06.451092] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:22.346 [2024-12-16 19:23:06.451101] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:22.346 [2024-12-16 19:23:06.451111] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:22.346 [2024-12-16 19:23:06.451119] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:22.346 [2024-12-16 19:23:06.451127] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:22.346 [2024-12-16 19:23:06.451138] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:22.346 [2024-12-16 19:23:06.451146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.346 [2024-12-16 19:23:06.451154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:22.346 [2024-12-16 19:23:06.451161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.278 ms 00:21:22.346 [2024-12-16 19:23:06.451191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.346 [2024-12-16 19:23:06.451280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.346 [2024-12-16 19:23:06.451291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:22.346 [2024-12-16 19:23:06.451300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:21:22.346 [2024-12-16 19:23:06.451308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.346 [2024-12-16 19:23:06.451414] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:22.346 [2024-12-16 19:23:06.451425] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:22.346 [2024-12-16 19:23:06.451434] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 
MiB 00:21:22.346 [2024-12-16 19:23:06.451442] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:22.346 [2024-12-16 19:23:06.451450] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:22.346 [2024-12-16 19:23:06.451458] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:22.346 [2024-12-16 19:23:06.451465] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:22.346 [2024-12-16 19:23:06.451472] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:22.346 [2024-12-16 19:23:06.451478] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:22.346 [2024-12-16 19:23:06.451486] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:22.346 [2024-12-16 19:23:06.451493] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:22.346 [2024-12-16 19:23:06.451501] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:22.346 [2024-12-16 19:23:06.451508] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:22.346 [2024-12-16 19:23:06.451523] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:22.346 [2024-12-16 19:23:06.451530] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:22.346 [2024-12-16 19:23:06.451536] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:22.346 [2024-12-16 19:23:06.451543] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:22.346 [2024-12-16 19:23:06.451550] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:22.346 [2024-12-16 19:23:06.451556] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:22.346 [2024-12-16 19:23:06.451564] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:22.346 [2024-12-16 19:23:06.451571] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:22.346 [2024-12-16 19:23:06.451578] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:22.346 [2024-12-16 19:23:06.451585] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:22.346 [2024-12-16 19:23:06.451592] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:22.346 [2024-12-16 19:23:06.451599] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:22.346 [2024-12-16 19:23:06.451605] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:22.346 [2024-12-16 19:23:06.451612] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:22.346 [2024-12-16 19:23:06.451619] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:22.346 [2024-12-16 19:23:06.451626] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:22.346 [2024-12-16 19:23:06.451633] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:22.346 [2024-12-16 19:23:06.451639] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:22.346 [2024-12-16 19:23:06.451646] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:22.346 [2024-12-16 19:23:06.451654] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:22.346 [2024-12-16 19:23:06.451660] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:22.346 [2024-12-16 19:23:06.451667] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region trim_md_mirror 00:21:22.347 [2024-12-16 19:23:06.451674] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:22.347 [2024-12-16 19:23:06.451680] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:22.347 [2024-12-16 19:23:06.451686] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:22.347 [2024-12-16 19:23:06.451693] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:22.347 [2024-12-16 19:23:06.451700] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:22.347 [2024-12-16 19:23:06.451706] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:22.347 [2024-12-16 19:23:06.451714] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:22.347 [2024-12-16 19:23:06.451721] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:22.347 [2024-12-16 19:23:06.451728] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:22.347 [2024-12-16 19:23:06.451737] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:22.347 [2024-12-16 19:23:06.451745] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:22.347 [2024-12-16 19:23:06.451753] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:22.347 [2024-12-16 19:23:06.451761] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:22.347 [2024-12-16 19:23:06.451767] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:22.347 [2024-12-16 19:23:06.451774] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:22.347 [2024-12-16 19:23:06.451781] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:22.347 [2024-12-16 19:23:06.451788] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:22.347 [2024-12-16 19:23:06.451795] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:22.347 [2024-12-16 19:23:06.451804] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:22.347 [2024-12-16 19:23:06.451814] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:22.347 [2024-12-16 19:23:06.451824] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:22.347 [2024-12-16 19:23:06.451832] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:22.347 [2024-12-16 19:23:06.451841] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:22.347 [2024-12-16 19:23:06.451850] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:22.347 [2024-12-16 19:23:06.451857] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:22.347 [2024-12-16 19:23:06.451865] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:22.347 [2024-12-16 19:23:06.451872] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:22.347 [2024-12-16 19:23:06.451880] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:22.347 [2024-12-16 19:23:06.451887] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:22.347 [2024-12-16 19:23:06.451895] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:22.347 [2024-12-16 19:23:06.451902] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:22.347 [2024-12-16 19:23:06.451909] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:22.347 [2024-12-16 19:23:06.451917] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:22.347 [2024-12-16 19:23:06.451925] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:22.347 [2024-12-16 19:23:06.451932] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:22.347 [2024-12-16 19:23:06.451940] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:22.347 [2024-12-16 19:23:06.451948] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:22.347 [2024-12-16 19:23:06.451956] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:22.347 [2024-12-16 19:23:06.451963] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:22.347 [2024-12-16 19:23:06.451970] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:22.347 [2024-12-16 19:23:06.451981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.347 [2024-12-16 19:23:06.451990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:22.347 [2024-12-16 19:23:06.451997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.636 ms 00:21:22.347 [2024-12-16 19:23:06.452004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.347 [2024-12-16 19:23:06.484378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.347 [2024-12-16 19:23:06.484429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:22.347 [2024-12-16 19:23:06.484441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.330 ms 00:21:22.347 [2024-12-16 19:23:06.484454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.347 [2024-12-16 19:23:06.484540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.347 [2024-12-16 19:23:06.484549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:22.347 [2024-12-16 19:23:06.484557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.059 ms 00:21:22.347 [2024-12-16 19:23:06.484566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.347 [2024-12-16 19:23:06.532556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.347 [2024-12-16 19:23:06.532614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:22.347 [2024-12-16 19:23:06.532628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.926 ms 00:21:22.347 [2024-12-16 19:23:06.532637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.347 [2024-12-16 19:23:06.532689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.347 [2024-12-16 19:23:06.532700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:22.347 [2024-12-16 19:23:06.532712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:22.347 [2024-12-16 19:23:06.532721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.347 [2024-12-16 19:23:06.533336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.347 [2024-12-16 19:23:06.533368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:22.347 [2024-12-16 19:23:06.533379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.534 ms 00:21:22.347 [2024-12-16 19:23:06.533388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.347 [2024-12-16 19:23:06.533553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.347 [2024-12-16 19:23:06.533620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:22.347 [2024-12-16 19:23:06.533636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.133 ms 00:21:22.347 [2024-12-16 19:23:06.533645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.347 [2024-12-16 19:23:06.549763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.347 [2024-12-16 19:23:06.549811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:22.347 [2024-12-16 19:23:06.549823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.094 ms 00:21:22.347 [2024-12-16 19:23:06.549831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.347 [2024-12-16 19:23:06.564710] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:21:22.347 [2024-12-16 19:23:06.564928] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:22.347 [2024-12-16 19:23:06.564950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.347 [2024-12-16 19:23:06.564959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:22.347 [2024-12-16 19:23:06.564970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.006 ms 00:21:22.347 [2024-12-16 19:23:06.564977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.347 [2024-12-16 19:23:06.591209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.347 [2024-12-16 19:23:06.591415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:22.347 [2024-12-16 19:23:06.591437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.181 ms 00:21:22.347 [2024-12-16 19:23:06.591447] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.347 [2024-12-16 19:23:06.604885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.347 [2024-12-16 19:23:06.604947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:22.347 [2024-12-16 19:23:06.604963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.073 ms 00:21:22.347 [2024-12-16 19:23:06.604971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.347 [2024-12-16 19:23:06.618134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.347 [2024-12-16 19:23:06.618199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:22.347 [2024-12-16 19:23:06.618212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.108 ms 00:21:22.347 [2024-12-16 19:23:06.618220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.347 [2024-12-16 19:23:06.618914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.347 [2024-12-16 19:23:06.618949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:22.347 [2024-12-16 19:23:06.618961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.559 ms 00:21:22.347 [2024-12-16 19:23:06.618973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.347 [2024-12-16 19:23:06.686158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.347 [2024-12-16 19:23:06.686244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:22.347 [2024-12-16 19:23:06.686260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 67.162 ms 00:21:22.347 [2024-12-16 19:23:06.686277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.609 [2024-12-16 19:23:06.697869] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:22.609 [2024-12-16 19:23:06.701360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.609 [2024-12-16 19:23:06.701410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:22.609 [2024-12-16 19:23:06.701422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.022 ms 00:21:22.609 [2024-12-16 19:23:06.701432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.609 [2024-12-16 19:23:06.701527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.609 [2024-12-16 19:23:06.701540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:22.609 [2024-12-16 19:23:06.701550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:21:22.609 [2024-12-16 19:23:06.701559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.609 [2024-12-16 19:23:06.701634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.609 [2024-12-16 19:23:06.701647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:22.609 [2024-12-16 19:23:06.701656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:21:22.609 [2024-12-16 19:23:06.701663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.609 [2024-12-16 19:23:06.701685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.609 [2024-12-16 19:23:06.701694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Start core poller 00:21:22.609 [2024-12-16 19:23:06.701702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:22.609 [2024-12-16 19:23:06.701710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.609 [2024-12-16 19:23:06.701747] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:22.609 [2024-12-16 19:23:06.701760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.609 [2024-12-16 19:23:06.701768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:22.609 [2024-12-16 19:23:06.701777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:21:22.609 [2024-12-16 19:23:06.701786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.609 [2024-12-16 19:23:06.728188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.609 [2024-12-16 19:23:06.728386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:22.609 [2024-12-16 19:23:06.728410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.383 ms 00:21:22.609 [2024-12-16 19:23:06.728427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.609 [2024-12-16 19:23:06.728507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.609 [2024-12-16 19:23:06.728517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:22.609 [2024-12-16 19:23:06.728526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:21:22.609 [2024-12-16 19:23:06.728534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.609 [2024-12-16 19:23:06.729961] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 309.529 ms, result 0 00:21:23.552  [2024-12-16T19:23:08.848Z] Copying: 10/1024 [MB] (10 MBps) [2024-12-16T19:23:09.791Z] Copying: 20/1024 [MB] (10 MBps) [2024-12-16T19:23:11.176Z] Copying: 49/1024 [MB] (28 MBps) [2024-12-16T19:23:11.750Z] Copying: 95/1024 [MB] (46 MBps) [2024-12-16T19:23:13.137Z] Copying: 109/1024 [MB] (13 MBps) [2024-12-16T19:23:14.082Z] Copying: 123/1024 [MB] (14 MBps) [2024-12-16T19:23:15.024Z] Copying: 144/1024 [MB] (20 MBps) [2024-12-16T19:23:15.971Z] Copying: 163/1024 [MB] (19 MBps) [2024-12-16T19:23:16.914Z] Copying: 174/1024 [MB] (10 MBps) [2024-12-16T19:23:17.858Z] Copying: 185/1024 [MB] (10 MBps) [2024-12-16T19:23:18.852Z] Copying: 196/1024 [MB] (11 MBps) [2024-12-16T19:23:19.802Z] Copying: 222/1024 [MB] (25 MBps) [2024-12-16T19:23:20.746Z] Copying: 245/1024 [MB] (23 MBps) [2024-12-16T19:23:22.133Z] Copying: 264/1024 [MB] (18 MBps) [2024-12-16T19:23:23.077Z] Copying: 282/1024 [MB] (18 MBps) [2024-12-16T19:23:24.021Z] Copying: 293/1024 [MB] (10 MBps) [2024-12-16T19:23:24.964Z] Copying: 303/1024 [MB] (10 MBps) [2024-12-16T19:23:25.912Z] Copying: 324/1024 [MB] (20 MBps) [2024-12-16T19:23:26.855Z] Copying: 352/1024 [MB] (28 MBps) [2024-12-16T19:23:27.798Z] Copying: 385/1024 [MB] (32 MBps) [2024-12-16T19:23:29.184Z] Copying: 432/1024 [MB] (47 MBps) [2024-12-16T19:23:29.756Z] Copying: 450/1024 [MB] (18 MBps) [2024-12-16T19:23:31.143Z] Copying: 473/1024 [MB] (23 MBps) [2024-12-16T19:23:32.085Z] Copying: 495/1024 [MB] (21 MBps) [2024-12-16T19:23:33.029Z] Copying: 511/1024 [MB] (15 MBps) [2024-12-16T19:23:33.972Z] Copying: 533/1024 [MB] (22 MBps) [2024-12-16T19:23:34.916Z] Copying: 555/1024 [MB] (21 
MBps) [2024-12-16T19:23:35.857Z] Copying: 573/1024 [MB] (18 MBps) [2024-12-16T19:23:36.799Z] Copying: 593/1024 [MB] (19 MBps) [2024-12-16T19:23:37.743Z] Copying: 610/1024 [MB] (17 MBps) [2024-12-16T19:23:39.129Z] Copying: 626/1024 [MB] (15 MBps) [2024-12-16T19:23:40.072Z] Copying: 647/1024 [MB] (20 MBps) [2024-12-16T19:23:41.015Z] Copying: 666/1024 [MB] (18 MBps) [2024-12-16T19:23:41.958Z] Copying: 682/1024 [MB] (16 MBps) [2024-12-16T19:23:42.904Z] Copying: 693/1024 [MB] (10 MBps) [2024-12-16T19:23:43.847Z] Copying: 730/1024 [MB] (37 MBps) [2024-12-16T19:23:44.789Z] Copying: 782/1024 [MB] (51 MBps) [2024-12-16T19:23:45.788Z] Copying: 828/1024 [MB] (46 MBps) [2024-12-16T19:23:46.781Z] Copying: 845/1024 [MB] (16 MBps) [2024-12-16T19:23:48.168Z] Copying: 857/1024 [MB] (11 MBps) [2024-12-16T19:23:49.110Z] Copying: 899/1024 [MB] (42 MBps) [2024-12-16T19:23:50.053Z] Copying: 941/1024 [MB] (41 MBps) [2024-12-16T19:23:50.997Z] Copying: 961/1024 [MB] (19 MBps) [2024-12-16T19:23:51.940Z] Copying: 978/1024 [MB] (17 MBps) [2024-12-16T19:23:52.883Z] Copying: 995/1024 [MB] (16 MBps) [2024-12-16T19:23:53.456Z] Copying: 1014/1024 [MB] (19 MBps) [2024-12-16T19:23:53.456Z] Copying: 1024/1024 [MB] (average 22 MBps)[2024-12-16 19:23:53.175322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.102 [2024-12-16 19:23:53.175382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:09.102 [2024-12-16 19:23:53.175397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:09.102 [2024-12-16 19:23:53.175407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.102 [2024-12-16 19:23:53.175429] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:09.102 [2024-12-16 19:23:53.178553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.102 [2024-12-16 19:23:53.178592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:09.102 [2024-12-16 19:23:53.178604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.108 ms 00:22:09.102 [2024-12-16 19:23:53.178620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.102 [2024-12-16 19:23:53.180659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.102 [2024-12-16 19:23:53.180859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:09.102 [2024-12-16 19:23:53.180881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.009 ms 00:22:09.102 [2024-12-16 19:23:53.180890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.102 [2024-12-16 19:23:53.197741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.102 [2024-12-16 19:23:53.197807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:09.102 [2024-12-16 19:23:53.197820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.828 ms 00:22:09.102 [2024-12-16 19:23:53.197836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.102 [2024-12-16 19:23:53.203973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.102 [2024-12-16 19:23:53.204161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:09.102 [2024-12-16 19:23:53.204197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.087 ms 00:22:09.102 [2024-12-16 19:23:53.204206] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.102 [2024-12-16 19:23:53.231657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.102 [2024-12-16 19:23:53.231862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:09.102 [2024-12-16 19:23:53.231883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.395 ms 00:22:09.102 [2024-12-16 19:23:53.231891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.102 [2024-12-16 19:23:53.248782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.102 [2024-12-16 19:23:53.248835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:09.102 [2024-12-16 19:23:53.248849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.806 ms 00:22:09.102 [2024-12-16 19:23:53.248857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.102 [2024-12-16 19:23:53.249014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.102 [2024-12-16 19:23:53.249029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:09.102 [2024-12-16 19:23:53.249040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:22:09.102 [2024-12-16 19:23:53.249048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.102 [2024-12-16 19:23:53.275188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.102 [2024-12-16 19:23:53.275233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:09.102 [2024-12-16 19:23:53.275245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.124 ms 00:22:09.102 [2024-12-16 19:23:53.275252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.102 [2024-12-16 19:23:53.301441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.102 [2024-12-16 19:23:53.301490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:09.102 [2024-12-16 19:23:53.301502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.138 ms 00:22:09.102 [2024-12-16 19:23:53.301509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.102 [2024-12-16 19:23:53.327470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.102 [2024-12-16 19:23:53.327519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:09.102 [2024-12-16 19:23:53.327530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.909 ms 00:22:09.102 [2024-12-16 19:23:53.327537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.102 [2024-12-16 19:23:53.353311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.102 [2024-12-16 19:23:53.353360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:09.103 [2024-12-16 19:23:53.353372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.684 ms 00:22:09.103 [2024-12-16 19:23:53.353379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.103 [2024-12-16 19:23:53.353427] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:09.103 [2024-12-16 19:23:53.353443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:09.103 [2024-12-16 19:23:53.353462] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free
00:22:09.103 [2024-12-16 19:23:53.353470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free
00:22:09.103 [2024-12-16 19:23:53.353478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free
00:22:09.103 [2024-12-16 19:23:53.353485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free
00:22:09.103 [2024-12-16 19:23:53.353493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free
00:22:09.103 [2024-12-16 19:23:53.353501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free
00:22:09.103 [2024-12-16 19:23:53.353509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free
00:22:09.103 [2024-12-16 19:23:53.353516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free
00:22:09.103 [2024-12-16 19:23:53.353524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free
00:22:09.103 [2024-12-16 19:23:53.353531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free
00:22:09.103 [2024-12-16 19:23:53.353539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free
00:22:09.103 [2024-12-16 19:23:53.353546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free
00:22:09.103 [2024-12-16 19:23:53.353554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free
00:22:09.103 [2024-12-16 19:23:53.353561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free
00:22:09.103 [2024-12-16 19:23:53.353568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free
00:22:09.103 [2024-12-16 19:23:53.353575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free
00:22:09.103 [2024-12-16 19:23:53.353583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free
00:22:09.103 [2024-12-16 19:23:53.353591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free
00:22:09.103 [2024-12-16 19:23:53.353599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free
00:22:09.103 [2024-12-16 19:23:53.353606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free
00:22:09.103 [2024-12-16 19:23:53.353613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free
00:22:09.103 [2024-12-16 19:23:53.353621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free
00:22:09.103 [2024-12-16 19:23:53.353628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free
00:22:09.103 [2024-12-16 19:23:53.353635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free
00:22:09.103 [2024-12-16 19:23:53.353643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free
00:22:09.103 [2024-12-16 19:23:53.353653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free
00:22:09.103 [2024-12-16 19:23:53.353660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free
00:22:09.103 [2024-12-16 19:23:53.353667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free
00:22:09.103 [2024-12-16 19:23:53.353674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free
00:22:09.103 [2024-12-16 19:23:53.353683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free
00:22:09.103 [2024-12-16 19:23:53.353690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free
00:22:09.103 [2024-12-16 19:23:53.353698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free
00:22:09.103 [2024-12-16 19:23:53.353705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free
00:22:09.103 [2024-12-16 19:23:53.353713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free
00:22:09.103 [2024-12-16 19:23:53.353720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free
00:22:09.103 [2024-12-16 19:23:53.353727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free
00:22:09.103 [2024-12-16 19:23:53.353734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free
00:22:09.103 [2024-12-16 19:23:53.353741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free
00:22:09.103 [2024-12-16 19:23:53.353750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free
00:22:09.103 [2024-12-16 19:23:53.353756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free
00:22:09.103 [2024-12-16 19:23:53.353764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free
00:22:09.103 [2024-12-16 19:23:53.353771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free
00:22:09.103 [2024-12-16 19:23:53.353778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free
00:22:09.103 [2024-12-16 19:23:53.353785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free
00:22:09.103 [2024-12-16 19:23:53.353792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free
00:22:09.103 [2024-12-16 19:23:53.353799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free
00:22:09.103 [2024-12-16 19:23:53.353807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free
00:22:09.103 [2024-12-16 19:23:53.353814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free
00:22:09.103 [2024-12-16 19:23:53.353823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free
00:22:09.103 [2024-12-16 19:23:53.353830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free
00:22:09.103 [2024-12-16 19:23:53.353837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free
00:22:09.103 [2024-12-16 19:23:53.353845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free
00:22:09.103 [2024-12-16 19:23:53.353852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free
00:22:09.103 [2024-12-16 19:23:53.353858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free
00:22:09.103 [2024-12-16 19:23:53.353867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free
00:22:09.103 [2024-12-16 19:23:53.353875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free
00:22:09.103 [2024-12-16 19:23:53.353882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free
00:22:09.103 [2024-12-16 19:23:53.353889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free
00:22:09.103 [2024-12-16 19:23:53.353896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free
00:22:09.103 [2024-12-16 19:23:53.353903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free
00:22:09.103 [2024-12-16 19:23:53.353910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free
00:22:09.103 [2024-12-16 19:23:53.353918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free
00:22:09.103 [2024-12-16 19:23:53.353925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free
00:22:09.103 [2024-12-16 19:23:53.353933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free
00:22:09.103 [2024-12-16 19:23:53.353941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free
00:22:09.103 [2024-12-16 19:23:53.353948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free
00:22:09.103 [2024-12-16 19:23:53.353956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free
00:22:09.103 [2024-12-16 19:23:53.353963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free
00:22:09.103 [2024-12-16 19:23:53.353971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free
00:22:09.103 [2024-12-16 19:23:53.353978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free
00:22:09.103 [2024-12-16 19:23:53.353985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free
00:22:09.103 [2024-12-16 19:23:53.353992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free
00:22:09.103 [2024-12-16 19:23:53.353999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free
00:22:09.103 [2024-12-16 19:23:53.354007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free
00:22:09.103 [2024-12-16 19:23:53.354015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free
00:22:09.103 [2024-12-16 19:23:53.354022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free
00:22:09.103 [2024-12-16 19:23:53.354030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free
00:22:09.103 [2024-12-16 19:23:53.354037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free
00:22:09.103 [2024-12-16 19:23:53.354044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free
00:22:09.103 [2024-12-16 19:23:53.354051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free
00:22:09.103 [2024-12-16 19:23:53.354059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free
00:22:09.103 [2024-12-16 19:23:53.354066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free
00:22:09.103 [2024-12-16 19:23:53.354075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free
00:22:09.103 [2024-12-16 19:23:53.354084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free
00:22:09.103 [2024-12-16 19:23:53.354091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free
00:22:09.103 [2024-12-16 19:23:53.354098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free
00:22:09.103 [2024-12-16 19:23:53.354105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free
00:22:09.103 [2024-12-16 19:23:53.354112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free
00:22:09.104 [2024-12-16 19:23:53.354120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free
00:22:09.104 [2024-12-16 19:23:53.354127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free
00:22:09.104 [2024-12-16 19:23:53.354134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free
00:22:09.104 [2024-12-16 19:23:53.354142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free
00:22:09.104 [2024-12-16 19:23:53.354150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free
00:22:09.104 [2024-12-16 19:23:53.354157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free
00:22:09.104 [2024-12-16 19:23:53.354165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free
00:22:09.104 [2024-12-16 19:23:53.354196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free
00:22:09.104 [2024-12-16 19:23:53.354204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free
00:22:09.104 [2024-12-16 19:23:53.354212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free
00:22:09.104 [2024-12-16 19:23:53.354220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:22:09.104 [2024-12-16 19:23:53.354238] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:22:09.104 [2024-12-16 19:23:53.354250] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 2ff538f2-e73b-43af-beae-0346febb4f7c
00:22:09.104 [2024-12-16 19:23:53.354258] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:22:09.104 [2024-12-16 19:23:53.354266] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:22:09.104 [2024-12-16 19:23:53.354273] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:22:09.104 [2024-12-16 19:23:53.354281] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:22:09.104 [2024-12-16 19:23:53.354288] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:22:09.104 [2024-12-16 19:23:53.354305] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:22:09.104 [2024-12-16 19:23:53.354332] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:22:09.104 [2024-12-16 19:23:53.354339] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:22:09.104 [2024-12-16 19:23:53.354346] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:22:09.104 [2024-12-16 19:23:53.354353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:09.104 [2024-12-16 19:23:53.354361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:22:09.104 [2024-12-16 19:23:53.354372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.927 ms
00:22:09.104 [2024-12-16 19:23:53.354380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:09.104 [2024-12-16 19:23:53.368217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:09.104 [2024-12-16 19:23:53.368262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:22:09.104 [2024-12-16 19:23:53.368274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.788 ms
00:22:09.104 [2024-12-16 19:23:53.368282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:09.104 [2024-12-16 19:23:53.368669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:09.104 [2024-12-16 19:23:53.368679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:22:09.104 [2024-12-16 19:23:53.368688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.364 ms
00:22:09.104 [2024-12-16 19:23:53.368703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:09.104 [2024-12-16 19:23:53.405399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:22:09.104 [2024-12-16 19:23:53.405600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:22:09.104 [2024-12-16 19:23:53.405621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:22:09.104 [2024-12-16 19:23:53.405631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:09.104 [2024-12-16 19:23:53.405699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:22:09.104 [2024-12-16 19:23:53.405707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:22:09.104 [2024-12-16 19:23:53.405716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:22:09.104 [2024-12-16 19:23:53.405731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:09.104 [2024-12-16 19:23:53.405802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:22:09.104 [2024-12-16 19:23:53.405813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:22:09.104 [2024-12-16 19:23:53.405822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:22:09.104 [2024-12-16 19:23:53.405829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:09.104 [2024-12-16 19:23:53.405845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:22:09.104 [2024-12-16 19:23:53.405853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:22:09.104 [2024-12-16 19:23:53.405862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:22:09.104 [2024-12-16 19:23:53.405870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:09.365 [2024-12-16 19:23:53.491388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:22:09.365 [2024-12-16 19:23:53.491447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:22:09.365 [2024-12-16 19:23:53.491461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:22:09.365 [2024-12-16 19:23:53.491469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:09.365 [2024-12-16 19:23:53.562358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:22:09.365 [2024-12-16 19:23:53.562428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:22:09.365 [2024-12-16 19:23:53.562441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:22:09.365 [2024-12-16 19:23:53.562457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:09.365 [2024-12-16 19:23:53.562539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:22:09.365 [2024-12-16 19:23:53.562550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:22:09.365 [2024-12-16 19:23:53.562559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:22:09.365 [2024-12-16 19:23:53.562568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:09.365 [2024-12-16 19:23:53.562605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:22:09.365 [2024-12-16 19:23:53.562615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:22:09.365 [2024-12-16 19:23:53.562624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:22:09.365 [2024-12-16 19:23:53.562632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:09.365 [2024-12-16 19:23:53.562733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:22:09.365 [2024-12-16 19:23:53.562744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:22:09.365 [2024-12-16 19:23:53.562753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:22:09.365 [2024-12-16 19:23:53.562762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:09.365 [2024-12-16 19:23:53.562795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:22:09.365 [2024-12-16 19:23:53.562806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:22:09.365 [2024-12-16 19:23:53.562815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:22:09.365 [2024-12-16 19:23:53.562824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:09.365 [2024-12-16 19:23:53.562866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:22:09.365 [2024-12-16 19:23:53.562878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:22:09.365 [2024-12-16 19:23:53.562887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:22:09.365 [2024-12-16 19:23:53.562895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:09.365 [2024-12-16 19:23:53.562942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:22:09.365 [2024-12-16 19:23:53.562952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:22:09.365 [2024-12-16 19:23:53.562961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:22:09.365 [2024-12-16 19:23:53.562970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:09.365 [2024-12-16 19:23:53.563103] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 387.752 ms, result 0
00:22:10.309 
00:22:10.309 
00:22:10.570 19:23:54 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144
00:22:10.570 [2024-12-16 19:23:54.748489] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization...
[2024-12-16 19:23:54.748637] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79792 ]
[2024-12-16 19:23:54.912697] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-12-16 19:23:55.033314] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
[2024-12-16 19:23:55.333560] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
[2024-12-16 19:23:55.333647] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
[2024-12-16 19:23:55.495024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:11.353 [2024-12-16 19:23:55.495310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration
00:22:11.353 [2024-12-16 19:23:55.495335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms
00:22:11.353 [2024-12-16 19:23:55.495345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:11.353 [2024-12-16 19:23:55.495424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:11.353 [2024-12-16 19:23:55.495439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:22:11.353 [2024-12-16 19:23:55.495449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms
00:22:11.353 [2024-12-16 19:23:55.495457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:11.353 [2024-12-16 19:23:55.495480] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:22:11.353 [2024-12-16 19:23:55.496255] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:22:11.353 [2024-12-16 19:23:55.496276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:11.353 [2024-12-16 19:23:55.496286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:22:11.353 [2024-12-16 19:23:55.496296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.802 ms
00:22:11.353 [2024-12-16 19:23:55.496305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:11.353 [2024-12-16 19:23:55.498012] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:22:11.353 [2024-12-16 19:23:55.512714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:11.353 [2024-12-16 19:23:55.512768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block
00:22:11.353 [2024-12-16 19:23:55.512785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.704 ms
00:22:11.353 [2024-12-16 19:23:55.512793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:11.353 [2024-12-16 19:23:55.512884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:11.353 [2024-12-16 19:23:55.512895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block
00:22:11.353 [2024-12-16 19:23:55.512904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms
00:22:11.353 [2024-12-16 19:23:55.512912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:11.353 [2024-12-16 19:23:55.521496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:11.353 [2024-12-16 19:23:55.521545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:22:11.353 [2024-12-16 19:23:55.521556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.499 ms
00:22:11.353 [2024-12-16 19:23:55.521571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:11.353 [2024-12-16 19:23:55.521655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:11.353 [2024-12-16 19:23:55.521664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:22:11.353 [2024-12-16 19:23:55.521673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms
00:22:11.353 [2024-12-16 19:23:55.521682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:11.353 [2024-12-16 19:23:55.521729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:11.353 [2024-12-16 19:23:55.521739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device
00:22:11.353 [2024-12-16 19:23:55.521748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms
00:22:11.353 [2024-12-16 19:23:55.521755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:11.353 [2024-12-16 19:23:55.521783] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:22:11.354 [2024-12-16 19:23:55.526098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:11.354 [2024-12-16 19:23:55.526143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:22:11.354 [2024-12-16 19:23:55.526157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.322 ms
00:22:11.354 [2024-12-16 19:23:55.526165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:11.354 [2024-12-16 19:23:55.526220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:11.354 [2024-12-16 19:23:55.526230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands
00:22:11.354 [2024-12-16 19:23:55.526239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms
00:22:11.354 [2024-12-16 19:23:55.526247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:11.354 [2024-12-16 19:23:55.526304] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
00:22:11.354 [2024-12-16 19:23:55.526329] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes
00:22:11.354 [2024-12-16 19:23:55.526366] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes
00:22:11.354 [2024-12-16 19:23:55.526384] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes
00:22:11.354 [2024-12-16 19:23:55.526506] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes
00:22:11.354 [2024-12-16 19:23:55.526519] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
00:22:11.354 [2024-12-16 19:23:55.526530] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes
00:22:11.354 [2024-12-16 19:23:55.526540] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB
00:22:11.354 [2024-12-16 19:23:55.526550] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB
00:22:11.354 [2024-12-16 19:23:55.526559] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520
00:22:11.354 [2024-12-16 19:23:55.526567] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4
00:22:11.354 [2024-12-16 19:23:55.526574] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048
00:22:11.354 [2024-12-16 19:23:55.526585] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5
00:22:11.354 [2024-12-16 19:23:55.526594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:11.354 [2024-12-16 19:23:55.526601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout
00:22:11.354 [2024-12-16 19:23:55.526609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.293 ms
00:22:11.354 [2024-12-16 19:23:55.526617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:11.354 [2024-12-16 19:23:55.526701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:11.354 [2024-12-16 19:23:55.526710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout
00:22:11.354 [2024-12-16 19:23:55.526718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms
00:22:11.354 [2024-12-16 19:23:55.526726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:11.354 [2024-12-16 19:23:55.526827] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
00:22:11.354 [2024-12-16 19:23:55.526838] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb
00:22:11.354 [2024-12-16 19:23:55.526847] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB
00:22:11.354 [2024-12-16 19:23:55.526856] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:22:11.354 [2024-12-16 19:23:55.526863] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p
00:22:11.354 [2024-12-16 19:23:55.526870] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB
00:22:11.354 [2024-12-16 19:23:55.526877] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB
00:22:11.354 [2024-12-16 19:23:55.526884] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md
00:22:11.354 [2024-12-16 19:23:55.526892] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB
00:22:11.354 [2024-12-16 19:23:55.526899] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB
00:22:11.354 [2024-12-16 19:23:55.526905] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror
00:22:11.354 [2024-12-16 19:23:55.526912] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB
00:22:11.354 [2024-12-16 19:23:55.526919] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB
00:22:11.354 [2024-12-16 19:23:55.526936] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md
00:22:11.354 [2024-12-16 19:23:55.526943] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB
00:22:11.354 [2024-12-16 19:23:55.526951] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:22:11.354 [2024-12-16 19:23:55.526958] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror
00:22:11.354 [2024-12-16 19:23:55.526965] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB
00:22:11.354 [2024-12-16 19:23:55.526972] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:22:11.354 [2024-12-16 19:23:55.526979] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0
00:22:11.354 [2024-12-16 19:23:55.526987] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB
00:22:11.354 [2024-12-16 19:23:55.526993] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:22:11.354 [2024-12-16 19:23:55.527000] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1
00:22:11.354 [2024-12-16 19:23:55.527007] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB
00:22:11.354 [2024-12-16 19:23:55.527014] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:22:11.354 [2024-12-16 19:23:55.527021] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2
00:22:11.354 [2024-12-16 19:23:55.527027] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB
00:22:11.354 [2024-12-16 19:23:55.527034] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:22:11.354 [2024-12-16 19:23:55.527041] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3
00:22:11.354 [2024-12-16 19:23:55.527048] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB
00:22:11.354 [2024-12-16 19:23:55.527054] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:22:11.354 [2024-12-16 19:23:55.527062] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md
00:22:11.354 [2024-12-16 19:23:55.527069] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB
00:22:11.354 [2024-12-16 19:23:55.527076] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB
00:22:11.354 [2024-12-16 19:23:55.527083] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror
00:22:11.354 [2024-12-16 19:23:55.527089] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB
00:22:11.354 [2024-12-16 19:23:55.527096] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB
00:22:11.354 [2024-12-16 19:23:55.527103] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log
00:22:11.354 [2024-12-16 19:23:55.527110] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB
00:22:11.354 [2024-12-16 19:23:55.527117] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:22:11.354 [2024-12-16 19:23:55.527124] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror
00:22:11.354 [2024-12-16 19:23:55.527131] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB
00:22:11.354 [2024-12-16 19:23:55.527138] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:22:11.354 [2024-12-16 19:23:55.527144] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
00:22:11.354 [2024-12-16 19:23:55.527152] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror
00:22:11.354 [2024-12-16 19:23:55.527161] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB
00:22:11.354 [2024-12-16 19:23:55.527169] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:22:11.354 [2024-12-16 19:23:55.527203] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap
00:22:11.354 [2024-12-16 19:23:55.527211] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB
00:22:11.354 [2024-12-16 19:23:55.527218] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB
00:22:11.354 [2024-12-16 19:23:55.527225] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm
00:22:11.354 [2024-12-16 19:23:55.527232] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB
00:22:11.354 [2024-12-16 19:23:55.527239] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB
00:22:11.354 [2024-12-16 19:23:55.527247] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
00:22:11.354 [2024-12-16 19:23:55.527257] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:22:11.354 [2024-12-16 19:23:55.527269] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000
00:22:11.354 [2024-12-16 19:23:55.527277] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80
00:22:11.354 [2024-12-16 19:23:55.527294] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80
00:22:11.354 [2024-12-16 19:23:55.527301] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800
00:22:11.354 [2024-12-16 19:23:55.527308] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800
00:22:11.354 [2024-12-16 19:23:55.527316] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800
00:22:11.354 [2024-12-16 19:23:55.527324] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800
00:22:11.354 [2024-12-16 19:23:55.527331] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40
00:22:11.354 [2024-12-16 19:23:55.527339] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40
00:22:11.354 [2024-12-16 19:23:55.527346] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20
00:22:11.354 [2024-12-16 19:23:55.527353] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20
00:22:11.354 [2024-12-16 19:23:55.527360] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20
00:22:11.354 [2024-12-16 19:23:55.527367] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20
00:22:11.354 [2024-12-16 19:23:55.527376] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0
00:22:11.354 [2024-12-16 19:23:55.527383] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
00:22:11.354 [2024-12-16 19:23:55.527392] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:22:11.354 [2024-12-16 19:23:55.527400] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:22:11.354 [2024-12-16 19:23:55.527408] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
00:22:11.354 [2024-12-16 19:23:55.527415] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
00:22:11.354 [2024-12-16 19:23:55.527423] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
00:22:11.354 [2024-12-16 19:23:55.527431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:11.354 [2024-12-16 19:23:55.527438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade
00:22:11.354 [2024-12-16 19:23:55.527448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.673 ms
00:22:11.354 [2024-12-16 19:23:55.527455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:11.355 [2024-12-16 19:23:55.559898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:11.355 [2024-12-16 19:23:55.559954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:22:11.355 [2024-12-16 19:23:55.559967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.396 ms
00:22:11.355 [2024-12-16 19:23:55.559981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:11.355 [2024-12-16 19:23:55.560069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:11.355 [2024-12-16 19:23:55.560079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses
00:22:11.355 [2024-12-16 19:23:55.560088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms
00:22:11.355 [2024-12-16 19:23:55.560096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:11.355 [2024-12-16 19:23:55.605520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:11.355 [2024-12-16 19:23:55.605577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:22:11.355 [2024-12-16 19:23:55.605591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.357 ms
00:22:11.355 [2024-12-16 19:23:55.605601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:11.355 [2024-12-16 19:23:55.605653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:11.355 [2024-12-16 19:23:55.605665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:22:11.355 [2024-12-16 19:23:55.605678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms
00:22:11.355 [2024-12-16 19:23:55.605686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:11.355 [2024-12-16 19:23:55.606326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:11.355 [2024-12-16 19:23:55.606359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:22:11.355 [2024-12-16 19:23:55.606370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.560 ms
00:22:11.355 [2024-12-16 19:23:55.606378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:11.355 [2024-12-16 19:23:55.606551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:11.355 [2024-12-16 19:23:55.606562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:22:11.355 [2024-12-16 19:23:55.606575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.127 ms
00:22:11.355 [2024-12-16 19:23:55.606583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:11.355 [2024-12-16 19:23:55.622660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:11.355 [2024-12-16 19:23:55.622710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:22:11.355 [2024-12-16 19:23:55.622722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.056 ms
00:22:11.355 [2024-12-16 19:23:55.622729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:11.355 [2024-12-16 19:23:55.637039] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2
00:22:11.355 [2024-12-16 19:23:55.637092] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
00:22:11.355 [2024-12-16 19:23:55.637106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:11.355 [2024-12-16 19:23:55.637115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata
00:22:11.355 [2024-12-16 19:23:55.637125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.260 ms
00:22:11.355 [2024-12-16 19:23:55.637133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:11.355 [2024-12-16 19:23:55.663576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:11.355 [2024-12-16 19:23:55.663626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata
00:22:11.355 [2024-12-16 19:23:55.663639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.364 ms
00:22:11.355 [2024-12-16 19:23:55.663647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:11.355 [2024-12-16 19:23:55.677003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:11.355 [2024-12-16 19:23:55.677053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata
00:22:11.355 [2024-12-16 19:23:55.677066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.284 ms
00:22:11.355 [2024-12-16 19:23:55.677074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:11.355 [2024-12-16 19:23:55.690353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:11.355 [2024-12-16 19:23:55.690576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata
00:22:11.355 [2024-12-16 19:23:55.690598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.227 ms
00:22:11.355 [2024-12-16 19:23:55.690606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:11.355 [2024-12-16 19:23:55.691368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:11.355 [2024-12-16 19:23:55.691404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing
00:22:11.355 [2024-12-16 19:23:55.691420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.559 ms
00:22:11.355 [2024-12-16 19:23:55.691428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:11.616 [2024-12-16 19:23:55.758605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:11.616 [2024-12-16 19:23:55.758675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints
00:22:11.616 [2024-12-16 19:23:55.758699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 67.155 ms
00:22:11.616 [2024-12-16 19:23:55.758708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:11.616 [2024-12-16 19:23:55.770249] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB
00:22:11.616 [2024-12-16 19:23:55.773465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:11.616 [2024-12-16 19:23:55.773665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P
00:22:11.616 [2024-12-16 19:23:55.773687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.696 ms
00:22:11.616 [2024-12-16 19:23:55.773696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:11.616 [2024-12-16 19:23:55.773788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:11.616 [2024-12-16 19:23:55.773799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P
00:22:11.616 [2024-12-16 19:23:55.773810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms
00:22:11.616 [2024-12-16 19:23:55.773822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:11.616 [2024-12-16 19:23:55.773898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:11.616 [2024-12-16 19:23:55.773908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization
00:22:11.616 [2024-12-16 19:23:55.773917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms
00:22:11.616 [2024-12-16 19:23:55.773926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:11.616 [2024-12-16 19:23:55.773947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:11.616 [2024-12-16 19:23:55.773957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller
00:22:11.616 [2024-12-16 19:23:55.773965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms
00:22:11.616 [2024-12-16 19:23:55.773973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:11.616 [2024-12-16 19:23:55.774012] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:22:11.616 [2024-12-16 19:23:55.774023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:11.616 [2024-12-16 19:23:55.774031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup
00:22:11.616 [2024-12-16 19:23:55.774039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms
00:22:11.616 [2024-12-16 19:23:55.774048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:11.617 [2024-12-16 19:23:55.800607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:11.617 [2024-12-16 19:23:55.800802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
00:22:11.617 [2024-12-16 19:23:55.800834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.540 ms
00:22:11.617 [2024-12-16 19:23:55.800843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:11.617 [2024-12-16 19:23:55.800932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:11.617 [2024-12-16 19:23:55.800942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:22:11.617 [2024-12-16 19:23:55.800952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms
00:22:11.617 [2024-12-16 19:23:55.800961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:11.617 [2024-12-16 19:23:55.802471] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 306.902 ms, result 0
00:22:13.004  [2024-12-16T19:23:58.300Z] Copying: 20/1024 [MB] (20 MBps)
[2024-12-16T19:23:59.244Z] Copying: 35/1024 [MB] (15 MBps)
[2024-12-16T19:24:00.185Z] Copying: 50/1024 [MB] (14 MBps)
[2024-12-16T19:24:01.128Z] Copying: 61/1024 [MB] (10 MBps)
[2024-12-16T19:24:02.072Z] Copying: 79/1024 [MB] (17 MBps)
[2024-12-16T19:24:03.018Z] Copying: 90/1024 [MB] (10 MBps)
[2024-12-16T19:24:04.404Z] Copying: 100/1024 [MB] (10 MBps)
[2024-12-16T19:24:05.346Z] Copying: 111/1024 [MB] (10 MBps)
[2024-12-16T19:24:06.287Z] Copying: 131/1024 [MB] (19 MBps)
[2024-12-16T19:24:07.228Z] Copying: 147/1024 [MB] (16 MBps)
[2024-12-16T19:24:08.171Z] Copying: 158/1024 [MB] (10 MBps)
[2024-12-16T19:24:09.115Z] Copying: 168/1024 [MB] (10 MBps)
[2024-12-16T19:24:10.129Z] Copying: 179/1024 [MB] (10 MBps)
[2024-12-16T19:24:11.071Z] Copying: 190/1024 [MB] (10 MBps)
[2024-12-16T19:24:12.015Z] Copying: 205/1024 [MB] (15 MBps)
[2024-12-16T19:24:13.400Z] Copying: 227/1024 [MB] (22 MBps)
[2024-12-16T19:24:14.343Z] Copying: 245/1024 [MB] (17 MBps)
[2024-12-16T19:24:15.287Z] Copying: 266/1024 [MB] (21 MBps)
[2024-12-16T19:24:16.232Z] Copying: 282/1024 [MB] (15 MBps)
[2024-12-16T19:24:17.176Z] Copying: 304/1024 [MB] (22 MBps)
[2024-12-16T19:24:18.120Z] Copying: 322/1024 [MB] (17 MBps)
[2024-12-16T19:24:19.063Z] Copying: 339/1024 [MB] (17 MBps)
[2024-12-16T19:24:20.007Z] Copying: 354/1024 [MB] (15 MBps)
[2024-12-16T19:24:21.393Z] Copying: 369/1024 [MB] (15 MBps)
[2024-12-16T19:24:22.336Z] Copying: 387/1024 [MB] (17 MBps)
[2024-12-16T19:24:23.280Z] Copying: 400/1024 [MB] (13 MBps)
[2024-12-16T19:24:24.225Z] Copying: 411/1024 [MB] (10 MBps)
[2024-12-16T19:24:25.168Z] Copying: 421/1024 [MB] (10 MBps)
[2024-12-16T19:24:26.111Z] Copying: 432/1024 [MB] (10 MBps)
[2024-12-16T19:24:27.055Z] Copying: 446/1024 [MB] (14 MBps)
[2024-12-16T19:24:28.000Z] Copying: 463/1024 [MB] (16 MBps)
[2024-12-16T19:24:29.388Z] Copying: 481/1024 [MB] (17 MBps)
[2024-12-16T19:24:30.339Z] Copying: 495/1024 [MB] (13 MBps)
[2024-12-16T19:24:31.279Z] Copying: 509/1024 [MB] (14 MBps)
[2024-12-16T19:24:32.223Z] Copying: 522/1024 [MB] (12 MBps)
[2024-12-16T19:24:33.168Z] Copying: 537/1024 [MB] (15 MBps)
[2024-12-16T19:24:34.112Z] Copying: 556/1024 [MB] (18 MBps)
[2024-12-16T19:24:35.054Z] Copying: 570/1024 [MB] (14 MBps)
[2024-12-16T19:24:36.003Z] Copying: 586/1024 [MB] (16 MBps)
[2024-12-16T19:24:37.015Z] Copying: 603/1024 [MB] (16 MBps)
[2024-12-16T19:24:38.402Z] Copying: 629/1024 [MB] (25 MBps)
[2024-12-16T19:24:39.346Z] Copying: 643/1024 [MB] (14 MBps)
[2024-12-16T19:24:40.328Z] Copying: 656/1024 [MB] (12 MBps)
[2024-12-16T19:24:41.269Z] Copying: 669/1024 [MB] (13 MBps)
[2024-12-16T19:24:42.211Z] Copying: 690/1024 [MB] (21 MBps)
[2024-12-16T19:24:43.155Z] Copying: 711/1024 [MB] (20 MBps)
[2024-12-16T19:24:44.099Z] Copying: 726/1024 [MB] (14 MBps)
[2024-12-16T19:24:45.043Z] Copying: 740/1024 [MB] (14 MBps)
[2024-12-16T19:24:46.428Z] Copying: 750/1024 [MB] (10 MBps)
[2024-12-16T19:24:47.001Z] Copying: 761/1024 [MB] (10 MBps)
[2024-12-16T19:24:48.388Z] Copying: 772/1024 [MB] (10 MBps)
[2024-12-16T19:24:49.332Z] Copying: 783/1024 [MB] (10 MBps)
[2024-12-16T19:24:50.275Z] Copying: 794/1024 [MB] (11 MBps)
[2024-12-16T19:24:51.218Z] Copying: 805/1024 [MB] (10 MBps)
[2024-12-16T19:24:52.163Z] Copying: 815/1024 [MB] (10 MBps)
[2024-12-16T19:24:53.108Z] Copying: 828/1024 [MB] (13 MBps)
[2024-12-16T19:24:54.053Z] Copying: 847/1024 [MB] (18 MBps)
[2024-12-16T19:24:54.996Z] Copying: 860/1024 [MB] (13 MBps)
[2024-12-16T19:24:56.381Z] Copying: 871/1024 [MB] (10 MBps)
[2024-12-16T19:24:57.323Z] Copying: 883/1024 [MB] (12 MBps)
[2024-12-16T19:24:58.267Z] Copying: 903/1024 [MB] (20 MBps)
[2024-12-16T19:24:59.208Z] Copying: 917/1024 [MB] (13 MBps)
[2024-12-16T19:25:00.150Z] Copying: 927/1024 [MB] (10 MBps)
[2024-12-16T19:25:01.093Z] Copying: 938/1024 [MB] (10 MBps)
[2024-12-16T19:25:02.037Z] Copying: 954/1024 [MB] (15 MBps)
[2024-12-16T19:25:03.000Z] Copying: 965/1024 [MB] (11 MBps)
[2024-12-16T19:25:04.386Z] Copying: 982/1024 [MB] (16 MBps)
[2024-12-16T19:25:05.329Z] Copying: 996/1024 [MB] (13 MBps)
[2024-12-16T19:25:05.900Z] Copying: 1013/1024 [MB] (17 MBps)
[2024-12-16T19:25:06.161Z] Copying: 1024/1024 [MB] (average 14 MBps)
[2024-12-16 19:25:06.042916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:21.807 [2024-12-16 19:25:06.043033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:23:21.807 [2024-12-16 19:25:06.043070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms
00:23:21.807 [2024-12-16 19:25:06.043093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:21.807 [2024-12-16 19:25:06.043153] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:23:21.807 [2024-12-16 19:25:06.050778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:21.807 [2024-12-16 19:25:06.050856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:23:21.807 [2024-12-16 19:25:06.050883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.555 ms
00:23:21.807 [2024-12-16 19:25:06.050904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:21.807 [2024-12-16 19:25:06.051566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:21.807 [2024-12-16 19:25:06.051606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:23:21.807 [2024-12-16 19:25:06.051631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.609 ms
00:23:21.807 [2024-12-16 19:25:06.051652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:21.807 [2024-12-16 19:25:06.057290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:21.807 [2024-12-16 19:25:06.057426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:23:21.807 [2024-12-16 19:25:06.057441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.601 ms
00:23:21.807 [2024-12-16 19:25:06.057453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:21.807 [2024-12-16 19:25:06.063569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:21.807 [2024-12-16 19:25:06.063593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims
00:23:21.807 [2024-12-16 19:25:06.063603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.095 ms
00:23:21.807 [2024-12-16 19:25:06.063610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:21.807 [2024-12-16 19:25:06.087234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:21.807 [2024-12-16 19:25:06.087265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:23:21.807 [2024-12-16 19:25:06.087275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.577 ms
00:23:21.807 [2024-12-16 19:25:06.087284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:21.807 [2024-12-16 19:25:06.100890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:21.807 [2024-12-16 19:25:06.101015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata
00:23:21.807 [2024-12-16 19:25:06.101032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.576 ms
00:23:21.807 [2024-12-16 19:25:06.101040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:21.807 [2024-12-16 19:25:06.101168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:21.807 [2024-12-16 19:25:06.101194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata
00:23:21.807 [2024-12-16 19:25:06.101203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms
00:23:21.807 [2024-12-16 19:25:06.101210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:21.808 [2024-12-16 19:25:06.124128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:21.808 [2024-12-16 19:25:06.124256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata
00:23:21.808 [2024-12-16 19:25:06.124272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.904 ms
00:23:21.808 [2024-12-16 19:25:06.124279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:21.808 [2024-12-16 19:25:06.149858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:21.808 [2024-12-16 19:25:06.149891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata
00:23:21.808 [2024-12-16 19:25:06.149902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.470 ms
00:23:21.808 [2024-12-16 19:25:06.149909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:22.070 [2024-12-16 19:25:06.173885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:22.070 [2024-12-16 19:25:06.173919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:23:22.070 [2024-12-16 19:25:06.173928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.942 ms
00:23:22.070 [2024-12-16 19:25:06.173935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:22.070 [2024-12-16 19:25:06.197519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:22.070 [2024-12-16 19:25:06.197568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:23:22.070 [2024-12-16 19:25:06.197579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.528 ms
00:23:22.070 [2024-12-16 19:25:06.197586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:22.070 [2024-12-16 19:25:06.197619] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:23:22.070 [2024-12-16 19:25:06.197639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free
00:23:22.070 [2024-12-16 19:25:06.197651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free
00:23:22.070 [2024-12-16 19:25:06.197659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free
00:23:22.070 [2024-12-16 19:25:06.197667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free
00:23:22.070 [2024-12-16 19:25:06.197675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free
00:23:22.070 [2024-12-16 19:25:06.197683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free
00:23:22.070 [2024-12-16 19:25:06.197691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free
00:23:22.070 [2024-12-16 19:25:06.197699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free
00:23:22.070 [2024-12-16 19:25:06.197706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free
00:23:22.070 [2024-12-16 19:25:06.197714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free
00:23:22.070 [2024-12-16 19:25:06.197721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free
00:23:22.070 [2024-12-16 19:25:06.197729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free
00:23:22.070 [2024-12-16 19:25:06.197736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free
00:23:22.070 [2024-12-16 19:25:06.197744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free
00:23:22.070 [2024-12-16 19:25:06.197751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free
00:23:22.070 [2024-12-16 19:25:06.197758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free
00:23:22.070 [2024-12-16 19:25:06.197766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free
00:23:22.070 [2024-12-16 19:25:06.197773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free
00:23:22.070 [2024-12-16 19:25:06.197780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free
00:23:22.070 [2024-12-16 19:25:06.197787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free
00:23:22.070 [2024-12-16 19:25:06.197794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free
00:23:22.070 [2024-12-16 19:25:06.197801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free
00:23:22.070 [2024-12-16 19:25:06.197809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free
00:23:22.070 [2024-12-16 19:25:06.197816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free
00:23:22.070 [2024-12-16 19:25:06.197823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free
00:23:22.070 [2024-12-16 19:25:06.197830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free
00:23:22.070 [2024-12-16 19:25:06.197838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free
00:23:22.070 [2024-12-16 19:25:06.197847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free
00:23:22.070 [2024-12-16 19:25:06.197854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free
00:23:22.070 [2024-12-16 19:25:06.197863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free
00:23:22.070 [2024-12-16 19:25:06.197870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free
00:23:22.070 [2024-12-16 19:25:06.197877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free
00:23:22.070 [2024-12-16 19:25:06.197885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free
00:23:22.070 [2024-12-16 19:25:06.197892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free
00:23:22.070 [2024-12-16 19:25:06.197899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free
00:23:22.070 [2024-12-16 19:25:06.197906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free
00:23:22.070 [2024-12-16 19:25:06.197913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free
00:23:22.070 [2024-12-16 19:25:06.197921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free
00:23:22.070 [2024-12-16 19:25:06.197928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free
00:23:22.070 [2024-12-16 19:25:06.197936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free
00:23:22.070 [2024-12-16 19:25:06.197943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free
00:23:22.070 [2024-12-16 19:25:06.197950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free
00:23:22.070 [2024-12-16 19:25:06.197958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free
00:23:22.070 [2024-12-16 19:25:06.197965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free
00:23:22.070 [2024-12-16 19:25:06.197972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free
00:23:22.070 [2024-12-16 19:25:06.197979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free
00:23:22.070 [2024-12-16 19:25:06.197986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free
00:23:22.070 [2024-12-16 19:25:06.197994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free
00:23:22.070 [2024-12-16 19:25:06.198001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free
00:23:22.070 [2024-12-16 19:25:06.198009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free
00:23:22.070 [2024-12-16 19:25:06.198016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free
00:23:22.070 [2024-12-16 19:25:06.198023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free
00:23:22.070 [2024-12-16 19:25:06.198030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free
00:23:22.070 [2024-12-16 19:25:06.198038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free
00:23:22.070 [2024-12-16 19:25:06.198046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free
00:23:22.070 [2024-12-16 19:25:06.198054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free
00:23:22.070 [2024-12-16 19:25:06.198061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free
00:23:22.070 [2024-12-16 19:25:06.198068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free
00:23:22.070 [2024-12-16 19:25:06.198076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free
00:23:22.070 [2024-12-16 19:25:06.198085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free
00:23:22.070 [2024-12-16 19:25:06.198092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free
00:23:22.071 [2024-12-16 19:25:06.198100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free
00:23:22.071 [2024-12-16 19:25:06.198107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free
00:23:22.071 [2024-12-16 19:25:06.198114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free
00:23:22.071 [2024-12-16 19:25:06.198121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free
00:23:22.071 [2024-12-16 19:25:06.198129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free
00:23:22.071 [2024-12-16 19:25:06.198136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free
00:23:22.071 [2024-12-16 19:25:06.198143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free
00:23:22.071 [2024-12-16 19:25:06.198151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69:
0 / 261120 wr_cnt: 0 state: free 00:23:22.071 [2024-12-16 19:25:06.198158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:22.071 [2024-12-16 19:25:06.198165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:22.071 [2024-12-16 19:25:06.198187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:22.071 [2024-12-16 19:25:06.198195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:22.071 [2024-12-16 19:25:06.198202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:22.071 [2024-12-16 19:25:06.198210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:22.071 [2024-12-16 19:25:06.198217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:22.071 [2024-12-16 19:25:06.198224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:22.071 [2024-12-16 19:25:06.198232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:22.071 [2024-12-16 19:25:06.198239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:22.071 [2024-12-16 19:25:06.198247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:22.071 [2024-12-16 19:25:06.198254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:22.071 [2024-12-16 19:25:06.198262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:22.071 [2024-12-16 19:25:06.198270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:22.071 [2024-12-16 19:25:06.198277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:22.071 [2024-12-16 19:25:06.198285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:22.071 [2024-12-16 19:25:06.198294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:22.071 [2024-12-16 19:25:06.198301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:22.071 [2024-12-16 19:25:06.198318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:22.071 [2024-12-16 19:25:06.198325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:22.071 [2024-12-16 19:25:06.198332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:22.071 [2024-12-16 19:25:06.198340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:22.071 [2024-12-16 19:25:06.198347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:22.071 [2024-12-16 19:25:06.198355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:22.071 [2024-12-16 19:25:06.198362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:22.071 [2024-12-16 19:25:06.198370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:22.071 [2024-12-16 19:25:06.198377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:22.071 [2024-12-16 19:25:06.198385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:22.071 [2024-12-16 19:25:06.198392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:22.071 [2024-12-16 19:25:06.198399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:22.071 [2024-12-16 19:25:06.198406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:22.071 [2024-12-16 19:25:06.198422] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:22.071 [2024-12-16 19:25:06.198430] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 2ff538f2-e73b-43af-beae-0346febb4f7c 00:23:22.071 [2024-12-16 19:25:06.198455] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:22.071 [2024-12-16 19:25:06.198463] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:22.071 [2024-12-16 19:25:06.198469] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:22.071 [2024-12-16 19:25:06.198477] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:22.071 [2024-12-16 19:25:06.198491] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:22.071 [2024-12-16 19:25:06.198498] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:22.071 [2024-12-16 19:25:06.198506] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:22.071 [2024-12-16 19:25:06.198512] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:22.071 [2024-12-16 19:25:06.198518] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:22.071 [2024-12-16 19:25:06.198526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.071 [2024-12-16 19:25:06.198533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:22.071 [2024-12-16 19:25:06.198542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.908 ms 00:23:22.071 [2024-12-16 19:25:06.198551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.071 [2024-12-16 19:25:06.210959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.071 [2024-12-16 19:25:06.210988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:22.071 [2024-12-16 19:25:06.210999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.390 ms 00:23:22.071 [2024-12-16 19:25:06.211006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.071 [2024-12-16 19:25:06.211385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.071 [2024-12-16 19:25:06.211396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:22.071 [2024-12-16 19:25:06.211409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.349 ms 00:23:22.071 [2024-12-16 19:25:06.211416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.071 
[2024-12-16 19:25:06.244734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:22.071 [2024-12-16 19:25:06.244884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:22.071 [2024-12-16 19:25:06.244902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:22.071 [2024-12-16 19:25:06.244911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.071 [2024-12-16 19:25:06.244970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:22.071 [2024-12-16 19:25:06.244979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:22.071 [2024-12-16 19:25:06.244992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:22.071 [2024-12-16 19:25:06.245001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.071 [2024-12-16 19:25:06.245052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:22.071 [2024-12-16 19:25:06.245062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:22.071 [2024-12-16 19:25:06.245071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:22.071 [2024-12-16 19:25:06.245079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.071 [2024-12-16 19:25:06.245095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:22.071 [2024-12-16 19:25:06.245103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:22.071 [2024-12-16 19:25:06.245112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:22.071 [2024-12-16 19:25:06.245123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.071 [2024-12-16 19:25:06.324496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:22.071 [2024-12-16 19:25:06.324537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:22.071 [2024-12-16 19:25:06.324549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:22.071 [2024-12-16 19:25:06.324557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.071 [2024-12-16 19:25:06.389401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:22.071 [2024-12-16 19:25:06.389445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:22.071 [2024-12-16 19:25:06.389461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:22.071 [2024-12-16 19:25:06.389469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.071 [2024-12-16 19:25:06.389538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:22.071 [2024-12-16 19:25:06.389548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:22.071 [2024-12-16 19:25:06.389556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:22.071 [2024-12-16 19:25:06.389563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.071 [2024-12-16 19:25:06.389598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:22.071 [2024-12-16 19:25:06.389607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:22.071 [2024-12-16 19:25:06.389615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:22.071 [2024-12-16 19:25:06.389622] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:22.071 [2024-12-16 19:25:06.389711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:22.071 [2024-12-16 19:25:06.389720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:23:22.071 [2024-12-16 19:25:06.389729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:23:22.071 [2024-12-16 19:25:06.389737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:22.071 [2024-12-16 19:25:06.389771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:22.071 [2024-12-16 19:25:06.389780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:23:22.071 [2024-12-16 19:25:06.389788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:23:22.071 [2024-12-16 19:25:06.389795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:22.071 [2024-12-16 19:25:06.389833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:22.071 [2024-12-16 19:25:06.389842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:23:22.071 [2024-12-16 19:25:06.389851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:23:22.071 [2024-12-16 19:25:06.389858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:22.072 [2024-12-16 19:25:06.389896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:22.072 [2024-12-16 19:25:06.389906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:23:22.072 [2024-12-16 19:25:06.389914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:23:22.072 [2024-12-16 19:25:06.389922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:22.072 [2024-12-16 19:25:06.390039] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 347.143 ms, result 0
00:23:23.013 
00:23:23.013 
00:23:23.013 19:25:07 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:23:24.925 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK
00:23:24.926 19:25:09 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072
[2024-12-16 19:25:09.140669] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization...
00:23:24.926 [2024-12-16 19:25:09.140786] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80553 ] 00:23:25.186 [2024-12-16 19:25:09.291949] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:25.186 [2024-12-16 19:25:09.392694] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:23:25.447 [2024-12-16 19:25:09.684515] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:25.447 [2024-12-16 19:25:09.684610] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:25.710 [2024-12-16 19:25:09.845332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.710 [2024-12-16 19:25:09.845398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:25.710 [2024-12-16 19:25:09.845413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:25.710 [2024-12-16 19:25:09.845422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.710 [2024-12-16 19:25:09.845477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.710 [2024-12-16 19:25:09.845490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:25.710 [2024-12-16 19:25:09.845499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:23:25.710 [2024-12-16 19:25:09.845507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.710 [2024-12-16 19:25:09.845528] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:25.710 [2024-12-16 19:25:09.846310] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:25.710 [2024-12-16 19:25:09.846331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.710 [2024-12-16 19:25:09.846340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:25.710 [2024-12-16 19:25:09.846349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.808 ms 00:23:25.710 [2024-12-16 19:25:09.846357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.710 [2024-12-16 19:25:09.848127] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:25.710 [2024-12-16 19:25:09.862292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.710 [2024-12-16 19:25:09.862511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:25.710 [2024-12-16 19:25:09.862534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.167 ms 00:23:25.710 [2024-12-16 19:25:09.862543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.710 [2024-12-16 19:25:09.862615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.710 [2024-12-16 19:25:09.862626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:25.710 [2024-12-16 19:25:09.862635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:23:25.710 [2024-12-16 19:25:09.862643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.710 [2024-12-16 19:25:09.870605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:23:25.710 [2024-12-16 19:25:09.870648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:25.710 [2024-12-16 19:25:09.870659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.884 ms 00:23:25.710 [2024-12-16 19:25:09.870672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.710 [2024-12-16 19:25:09.870749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.710 [2024-12-16 19:25:09.870759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:25.710 [2024-12-16 19:25:09.870768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:23:25.710 [2024-12-16 19:25:09.870776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.710 [2024-12-16 19:25:09.870819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.710 [2024-12-16 19:25:09.870829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:25.710 [2024-12-16 19:25:09.870838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:23:25.710 [2024-12-16 19:25:09.870846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.710 [2024-12-16 19:25:09.870873] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:25.710 [2024-12-16 19:25:09.874947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.710 [2024-12-16 19:25:09.874989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:25.710 [2024-12-16 19:25:09.875002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.081 ms 00:23:25.710 [2024-12-16 19:25:09.875010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.710 [2024-12-16 19:25:09.875047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.710 [2024-12-16 19:25:09.875056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:25.710 [2024-12-16 19:25:09.875064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:23:25.710 [2024-12-16 19:25:09.875072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.710 [2024-12-16 19:25:09.875122] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:25.710 [2024-12-16 19:25:09.875146] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:25.710 [2024-12-16 19:25:09.875204] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:25.710 [2024-12-16 19:25:09.875224] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:23:25.710 [2024-12-16 19:25:09.875331] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:25.710 [2024-12-16 19:25:09.875343] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:25.710 [2024-12-16 19:25:09.875354] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:25.710 [2024-12-16 19:25:09.875367] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:25.710 [2024-12-16 19:25:09.875376] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:25.710 [2024-12-16 19:25:09.875385] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:25.710 [2024-12-16 19:25:09.875393] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:25.710 [2024-12-16 19:25:09.875401] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:25.710 [2024-12-16 19:25:09.875413] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:25.710 [2024-12-16 19:25:09.875421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.710 [2024-12-16 19:25:09.875429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:25.710 [2024-12-16 19:25:09.875439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.303 ms 00:23:25.710 [2024-12-16 19:25:09.875446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.710 [2024-12-16 19:25:09.875537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.710 [2024-12-16 19:25:09.875546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:25.710 [2024-12-16 19:25:09.875554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:23:25.710 [2024-12-16 19:25:09.875562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.710 [2024-12-16 19:25:09.875666] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:25.710 [2024-12-16 19:25:09.875678] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:25.710 [2024-12-16 19:25:09.875687] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:25.710 [2024-12-16 19:25:09.875695] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:25.710 [2024-12-16 19:25:09.875704] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:25.710 [2024-12-16 19:25:09.875711] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:25.710 [2024-12-16 19:25:09.875718] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:25.710 [2024-12-16 19:25:09.875725] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:25.710 [2024-12-16 19:25:09.875734] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:25.710 [2024-12-16 19:25:09.875741] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:25.710 [2024-12-16 19:25:09.875748] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:25.710 [2024-12-16 19:25:09.875754] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:25.710 [2024-12-16 19:25:09.875762] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:25.710 [2024-12-16 19:25:09.875777] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:25.710 [2024-12-16 19:25:09.875785] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:25.710 [2024-12-16 19:25:09.875792] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:25.710 [2024-12-16 19:25:09.875803] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:25.710 [2024-12-16 19:25:09.875811] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:25.710 [2024-12-16 19:25:09.875818] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:25.711 [2024-12-16 19:25:09.875826] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:25.711 [2024-12-16 19:25:09.875834] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:25.711 [2024-12-16 19:25:09.875841] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:25.711 [2024-12-16 19:25:09.875848] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:25.711 [2024-12-16 19:25:09.875855] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:25.711 [2024-12-16 19:25:09.875861] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:25.711 [2024-12-16 19:25:09.875867] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:25.711 [2024-12-16 19:25:09.875874] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:25.711 [2024-12-16 19:25:09.875881] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:25.711 [2024-12-16 19:25:09.875887] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:25.711 [2024-12-16 19:25:09.875895] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:25.711 [2024-12-16 19:25:09.875902] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:25.711 [2024-12-16 19:25:09.875909] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:25.711 [2024-12-16 19:25:09.875916] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:25.711 [2024-12-16 19:25:09.875923] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:25.711 [2024-12-16 19:25:09.875930] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:25.711 [2024-12-16 19:25:09.875937] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:25.711 [2024-12-16 19:25:09.875945] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:25.711 [2024-12-16 19:25:09.875952] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:25.711 [2024-12-16 19:25:09.875959] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:25.711 [2024-12-16 19:25:09.875966] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:25.711 [2024-12-16 19:25:09.875974] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:25.711 [2024-12-16 19:25:09.875982] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:25.711 [2024-12-16 19:25:09.875990] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:25.711 [2024-12-16 19:25:09.875997] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:25.711 [2024-12-16 19:25:09.876006] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:25.711 [2024-12-16 19:25:09.876014] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:25.711 [2024-12-16 19:25:09.876022] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:25.711 [2024-12-16 19:25:09.876031] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:25.711 [2024-12-16 19:25:09.876040] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:25.711 [2024-12-16 19:25:09.876048] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:25.711 
[2024-12-16 19:25:09.876056] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:25.711 [2024-12-16 19:25:09.876062] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:25.711 [2024-12-16 19:25:09.876070] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:25.711 [2024-12-16 19:25:09.876079] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:25.711 [2024-12-16 19:25:09.876090] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:25.711 [2024-12-16 19:25:09.876102] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:25.711 [2024-12-16 19:25:09.876110] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:25.711 [2024-12-16 19:25:09.876118] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:25.711 [2024-12-16 19:25:09.876126] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:25.711 [2024-12-16 19:25:09.876134] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:25.711 [2024-12-16 19:25:09.876142] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:25.711 [2024-12-16 19:25:09.876151] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:25.711 [2024-12-16 19:25:09.876159] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:25.711 [2024-12-16 19:25:09.876167] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:25.711 [2024-12-16 19:25:09.876191] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:25.711 [2024-12-16 19:25:09.876198] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:25.711 [2024-12-16 19:25:09.876205] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:25.711 [2024-12-16 19:25:09.876214] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:25.711 [2024-12-16 19:25:09.876222] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:25.711 [2024-12-16 19:25:09.876230] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:25.711 [2024-12-16 19:25:09.876239] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:25.711 [2024-12-16 19:25:09.876247] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:23:25.711 [2024-12-16 19:25:09.876255] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:25.711 [2024-12-16 19:25:09.876262] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:25.711 [2024-12-16 19:25:09.876271] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:25.711 [2024-12-16 19:25:09.876279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.711 [2024-12-16 19:25:09.876286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:25.711 [2024-12-16 19:25:09.876294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.684 ms 00:23:25.711 [2024-12-16 19:25:09.876302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.711 [2024-12-16 19:25:09.908025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.711 [2024-12-16 19:25:09.908078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:25.711 [2024-12-16 19:25:09.908090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.677 ms 00:23:25.711 [2024-12-16 19:25:09.908103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.711 [2024-12-16 19:25:09.908211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.711 [2024-12-16 19:25:09.908221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:25.711 [2024-12-16 19:25:09.908230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:23:25.711 [2024-12-16 19:25:09.908238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.711 [2024-12-16 19:25:09.954117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.711 [2024-12-16 19:25:09.954204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:25.711 [2024-12-16 19:25:09.954220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.819 ms 00:23:25.711 [2024-12-16 19:25:09.954229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.711 [2024-12-16 19:25:09.954278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.711 [2024-12-16 19:25:09.954288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:25.711 [2024-12-16 19:25:09.954302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:25.711 [2024-12-16 19:25:09.954310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.711 [2024-12-16 19:25:09.954927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.711 [2024-12-16 19:25:09.954972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:25.711 [2024-12-16 19:25:09.954984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.542 ms 00:23:25.711 [2024-12-16 19:25:09.954992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.711 [2024-12-16 19:25:09.955151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.711 [2024-12-16 19:25:09.955161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:25.711 [2024-12-16 19:25:09.955189] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.129 ms 00:23:25.711 [2024-12-16 19:25:09.955198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.711 [2024-12-16 19:25:09.970731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.711 [2024-12-16 19:25:09.970934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:25.711 [2024-12-16 19:25:09.970955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.512 ms 00:23:25.711 [2024-12-16 19:25:09.970965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.711 [2024-12-16 19:25:09.985358] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:23:25.711 [2024-12-16 19:25:09.985542] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:25.711 [2024-12-16 19:25:09.985561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.711 [2024-12-16 19:25:09.985571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:25.711 [2024-12-16 19:25:09.985581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.475 ms 00:23:25.711 [2024-12-16 19:25:09.985588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.711 [2024-12-16 19:25:10.011807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.711 [2024-12-16 19:25:10.011861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:25.711 [2024-12-16 19:25:10.011874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.064 ms 00:23:25.711 [2024-12-16 19:25:10.011881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.711 [2024-12-16 19:25:10.024676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.711 [2024-12-16 19:25:10.024725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:25.711 [2024-12-16 19:25:10.024736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.732 ms 00:23:25.711 [2024-12-16 19:25:10.024743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.711 [2024-12-16 19:25:10.037043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.712 [2024-12-16 19:25:10.037089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:25.712 [2024-12-16 19:25:10.037101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.253 ms 00:23:25.712 [2024-12-16 19:25:10.037108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.712 [2024-12-16 19:25:10.037764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.712 [2024-12-16 19:25:10.037808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:25.712 [2024-12-16 19:25:10.037821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.528 ms 00:23:25.712 [2024-12-16 19:25:10.037829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.972 [2024-12-16 19:25:10.103242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.972 [2024-12-16 19:25:10.103477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:25.972 [2024-12-16 19:25:10.103508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 65.392 ms 00:23:25.972 [2024-12-16 19:25:10.103517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.972 [2024-12-16 19:25:10.114431] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:25.972 [2024-12-16 19:25:10.117524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.972 [2024-12-16 19:25:10.117567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:25.972 [2024-12-16 19:25:10.117579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.923 ms 00:23:25.972 [2024-12-16 19:25:10.117588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.972 [2024-12-16 19:25:10.117671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.972 [2024-12-16 19:25:10.117683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:25.972 [2024-12-16 19:25:10.117693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:23:25.972 [2024-12-16 19:25:10.117704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.972 [2024-12-16 19:25:10.117775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.972 [2024-12-16 19:25:10.117787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:25.972 [2024-12-16 19:25:10.117795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:23:25.972 [2024-12-16 19:25:10.117804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.972 [2024-12-16 19:25:10.117824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.972 [2024-12-16 19:25:10.117834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:25.972 [2024-12-16 19:25:10.117843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:25.972 [2024-12-16 19:25:10.117852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.972 [2024-12-16 19:25:10.117890] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:25.972 [2024-12-16 19:25:10.117902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.972 [2024-12-16 19:25:10.117910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:25.972 [2024-12-16 19:25:10.117920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:23:25.972 [2024-12-16 19:25:10.117928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.972 [2024-12-16 19:25:10.143633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.972 [2024-12-16 19:25:10.143811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:25.972 [2024-12-16 19:25:10.143882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.685 ms 00:23:25.972 [2024-12-16 19:25:10.143905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.972 [2024-12-16 19:25:10.144064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.972 [2024-12-16 19:25:10.144094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:25.972 [2024-12-16 19:25:10.144116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:23:25.972 [2024-12-16 19:25:10.144191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:23:25.972 [2024-12-16 19:25:10.145878] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 300.067 ms, result 0 00:23:26.917  [2024-12-16T19:25:12.215Z] Copying: 14/1024 [MB] (14 MBps) [2024-12-16T19:25:13.160Z] Copying: 34/1024 [MB] (20 MBps) [2024-12-16T19:25:14.547Z] Copying: 50/1024 [MB] (16 MBps) [2024-12-16T19:25:15.490Z] Copying: 72/1024 [MB] (21 MBps) [2024-12-16T19:25:16.432Z] Copying: 104/1024 [MB] (32 MBps) [2024-12-16T19:25:17.376Z] Copying: 128/1024 [MB] (24 MBps) [2024-12-16T19:25:18.319Z] Copying: 140/1024 [MB] (12 MBps) [2024-12-16T19:25:19.262Z] Copying: 174/1024 [MB] (33 MBps) [2024-12-16T19:25:20.207Z] Copying: 193/1024 [MB] (19 MBps) [2024-12-16T19:25:21.595Z] Copying: 209/1024 [MB] (16 MBps) [2024-12-16T19:25:22.167Z] Copying: 229/1024 [MB] (19 MBps) [2024-12-16T19:25:23.555Z] Copying: 248/1024 [MB] (18 MBps) [2024-12-16T19:25:24.498Z] Copying: 263/1024 [MB] (15 MBps) [2024-12-16T19:25:25.441Z] Copying: 280/1024 [MB] (16 MBps) [2024-12-16T19:25:26.384Z] Copying: 300/1024 [MB] (20 MBps) [2024-12-16T19:25:27.329Z] Copying: 316/1024 [MB] (15 MBps) [2024-12-16T19:25:28.301Z] Copying: 334/1024 [MB] (17 MBps) [2024-12-16T19:25:29.261Z] Copying: 352/1024 [MB] (17 MBps) [2024-12-16T19:25:30.206Z] Copying: 364/1024 [MB] (12 MBps) [2024-12-16T19:25:31.593Z] Copying: 380/1024 [MB] (15 MBps) [2024-12-16T19:25:32.165Z] Copying: 392/1024 [MB] (12 MBps) [2024-12-16T19:25:33.557Z] Copying: 407/1024 [MB] (14 MBps) [2024-12-16T19:25:34.501Z] Copying: 427/1024 [MB] (20 MBps) [2024-12-16T19:25:35.444Z] Copying: 446/1024 [MB] (19 MBps) [2024-12-16T19:25:36.387Z] Copying: 459/1024 [MB] (13 MBps) [2024-12-16T19:25:37.343Z] Copying: 486/1024 [MB] (26 MBps) [2024-12-16T19:25:38.287Z] Copying: 498/1024 [MB] (12 MBps) [2024-12-16T19:25:39.230Z] Copying: 511/1024 [MB] (13 MBps) [2024-12-16T19:25:40.175Z] Copying: 538/1024 [MB] (26 MBps) [2024-12-16T19:25:41.564Z] Copying: 553/1024 [MB] (14 MBps) [2024-12-16T19:25:42.509Z] Copying: 577048/1048576 [kB] (10184 kBps) [2024-12-16T19:25:43.451Z] Copying: 589/1024 [MB] (26 MBps) [2024-12-16T19:25:44.395Z] Copying: 616/1024 [MB] (26 MBps) [2024-12-16T19:25:45.339Z] Copying: 626/1024 [MB] (10 MBps) [2024-12-16T19:25:46.280Z] Copying: 640/1024 [MB] (13 MBps) [2024-12-16T19:25:47.223Z] Copying: 680/1024 [MB] (40 MBps) [2024-12-16T19:25:48.165Z] Copying: 720/1024 [MB] (40 MBps) [2024-12-16T19:25:49.551Z] Copying: 740/1024 [MB] (20 MBps) [2024-12-16T19:25:50.493Z] Copying: 762/1024 [MB] (21 MBps) [2024-12-16T19:25:51.436Z] Copying: 786/1024 [MB] (23 MBps) [2024-12-16T19:25:52.378Z] Copying: 814/1024 [MB] (28 MBps) [2024-12-16T19:25:53.321Z] Copying: 837/1024 [MB] (23 MBps) [2024-12-16T19:25:54.270Z] Copying: 862/1024 [MB] (24 MBps) [2024-12-16T19:25:55.302Z] Copying: 884/1024 [MB] (21 MBps) [2024-12-16T19:25:56.252Z] Copying: 900/1024 [MB] (16 MBps) [2024-12-16T19:25:57.195Z] Copying: 919/1024 [MB] (18 MBps) [2024-12-16T19:25:58.581Z] Copying: 941/1024 [MB] (22 MBps) [2024-12-16T19:25:59.525Z] Copying: 987/1024 [MB] (45 MBps) [2024-12-16T19:26:00.099Z] Copying: 1023/1024 [MB] (36 MBps) [2024-12-16T19:26:00.099Z] Copying: 1024/1024 [MB] (average 20 MBps)[2024-12-16 19:25:59.830788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:15.745 [2024-12-16 19:25:59.830873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:15.745 [2024-12-16 19:25:59.830899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:15.745 
[2024-12-16 19:25:59.830909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.745 [2024-12-16 19:25:59.832238] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:15.745 [2024-12-16 19:25:59.835763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:15.745 [2024-12-16 19:25:59.835812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:15.745 [2024-12-16 19:25:59.835824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.488 ms 00:24:15.745 [2024-12-16 19:25:59.835833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.745 [2024-12-16 19:25:59.860855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:15.745 [2024-12-16 19:25:59.860951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:15.745 [2024-12-16 19:25:59.860973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.298 ms 00:24:15.745 [2024-12-16 19:25:59.861000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.745 [2024-12-16 19:25:59.889345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:15.745 [2024-12-16 19:25:59.889397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:15.745 [2024-12-16 19:25:59.889410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.320 ms 00:24:15.745 [2024-12-16 19:25:59.889420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.745 [2024-12-16 19:25:59.895576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:15.745 [2024-12-16 19:25:59.895617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:15.745 [2024-12-16 19:25:59.895629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.115 ms 00:24:15.745 [2024-12-16 19:25:59.895645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.745 [2024-12-16 19:25:59.922789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:15.745 [2024-12-16 19:25:59.922838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:15.745 [2024-12-16 19:25:59.922851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.105 ms 00:24:15.745 [2024-12-16 19:25:59.922860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.745 [2024-12-16 19:25:59.938697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:15.745 [2024-12-16 19:25:59.938903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:15.745 [2024-12-16 19:25:59.938927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.791 ms 00:24:15.745 [2024-12-16 19:25:59.938936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.009 [2024-12-16 19:26:00.266312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.009 [2024-12-16 19:26:00.266379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:16.009 [2024-12-16 19:26:00.266393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 327.325 ms 00:24:16.009 [2024-12-16 19:26:00.266402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.009 [2024-12-16 19:26:00.292829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.009 [2024-12-16 19:26:00.292877] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:16.009 [2024-12-16 19:26:00.292889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.412 ms 00:24:16.009 [2024-12-16 19:26:00.292896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.009 [2024-12-16 19:26:00.318607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.009 [2024-12-16 19:26:00.318654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:16.009 [2024-12-16 19:26:00.318667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.663 ms 00:24:16.009 [2024-12-16 19:26:00.318674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.009 [2024-12-16 19:26:00.343424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.009 [2024-12-16 19:26:00.343470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:16.009 [2024-12-16 19:26:00.343482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.704 ms 00:24:16.009 [2024-12-16 19:26:00.343490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.271 [2024-12-16 19:26:00.368166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.271 [2024-12-16 19:26:00.368385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:16.271 [2024-12-16 19:26:00.368406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.603 ms 00:24:16.271 [2024-12-16 19:26:00.368414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.271 [2024-12-16 19:26:00.368517] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:16.271 [2024-12-16 19:26:00.368549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 109312 / 261120 wr_cnt: 1 state: open 00:24:16.271 [2024-12-16 19:26:00.368562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:16.271 [2024-12-16 19:26:00.368570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:16.271 [2024-12-16 19:26:00.368580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:16.271 [2024-12-16 19:26:00.368588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:16.271 [2024-12-16 19:26:00.368597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:16.271 [2024-12-16 19:26:00.368605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:16.271 [2024-12-16 19:26:00.368613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:16.271 [2024-12-16 19:26:00.368621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:16.271 [2024-12-16 19:26:00.368629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:16.271 [2024-12-16 19:26:00.368636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:16.271 [2024-12-16 19:26:00.368644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:16.271 [2024-12-16 19:26:00.368652] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:16.271 [2024-12-16 19:26:00.368659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:16.271 [2024-12-16 19:26:00.368667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:16.271 [2024-12-16 19:26:00.368674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:16.271 [2024-12-16 19:26:00.368681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:16.271 [2024-12-16 19:26:00.368688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:16.271 [2024-12-16 19:26:00.368696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:16.271 [2024-12-16 19:26:00.368703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:16.271 [2024-12-16 19:26:00.368710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:16.272 [2024-12-16 19:26:00.368718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:16.272 [2024-12-16 19:26:00.368725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:16.272 [2024-12-16 19:26:00.368733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:16.272 [2024-12-16 19:26:00.368741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:16.272 [2024-12-16 19:26:00.368748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:16.272 [2024-12-16 19:26:00.368755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:16.272 [2024-12-16 19:26:00.368765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:16.272 [2024-12-16 19:26:00.368772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:16.272 [2024-12-16 19:26:00.368780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:16.272 [2024-12-16 19:26:00.368790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:16.272 [2024-12-16 19:26:00.368798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:16.272 [2024-12-16 19:26:00.368805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:16.272 [2024-12-16 19:26:00.368813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:16.272 [2024-12-16 19:26:00.368821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:16.272 [2024-12-16 19:26:00.368829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:16.272 [2024-12-16 19:26:00.368837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:16.272 [2024-12-16 
19:26:00.368845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:16.272 [2024-12-16 19:26:00.368852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:16.272 [2024-12-16 19:26:00.368861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:16.272 [2024-12-16 19:26:00.368869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:16.272 [2024-12-16 19:26:00.368877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:16.272 [2024-12-16 19:26:00.368885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:16.272 [2024-12-16 19:26:00.368893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:16.272 [2024-12-16 19:26:00.368900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:16.272 [2024-12-16 19:26:00.368908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:16.272 [2024-12-16 19:26:00.368916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:16.272 [2024-12-16 19:26:00.368924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:16.272 [2024-12-16 19:26:00.368932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:16.272 [2024-12-16 19:26:00.368939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:16.272 [2024-12-16 19:26:00.368947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:16.272 [2024-12-16 19:26:00.368955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:16.272 [2024-12-16 19:26:00.368962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:16.272 [2024-12-16 19:26:00.368969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:16.272 [2024-12-16 19:26:00.368977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:16.272 [2024-12-16 19:26:00.368984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:16.272 [2024-12-16 19:26:00.368992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:16.272 [2024-12-16 19:26:00.369007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:16.272 [2024-12-16 19:26:00.369014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:16.272 [2024-12-16 19:26:00.369022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:16.272 [2024-12-16 19:26:00.369030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:16.272 [2024-12-16 19:26:00.369038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 
00:24:16.272 [2024-12-16 19:26:00.369046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:16.272 [2024-12-16 19:26:00.369055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:16.272 [2024-12-16 19:26:00.369063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:16.272 [2024-12-16 19:26:00.369071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:16.272 [2024-12-16 19:26:00.369080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:16.272 [2024-12-16 19:26:00.369088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:16.272 [2024-12-16 19:26:00.369096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:16.272 [2024-12-16 19:26:00.369104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:16.272 [2024-12-16 19:26:00.369112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:16.272 [2024-12-16 19:26:00.369120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:16.272 [2024-12-16 19:26:00.369127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:16.272 [2024-12-16 19:26:00.369136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:16.272 [2024-12-16 19:26:00.369144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:16.272 [2024-12-16 19:26:00.369152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:16.272 [2024-12-16 19:26:00.369160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:16.272 [2024-12-16 19:26:00.369167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:16.272 [2024-12-16 19:26:00.369210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:16.272 [2024-12-16 19:26:00.369219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:16.272 [2024-12-16 19:26:00.369227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:16.272 [2024-12-16 19:26:00.369236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:16.272 [2024-12-16 19:26:00.369244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:16.272 [2024-12-16 19:26:00.369252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:16.272 [2024-12-16 19:26:00.369260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:16.272 [2024-12-16 19:26:00.369269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:16.272 [2024-12-16 19:26:00.369277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 
wr_cnt: 0 state: free 00:24:16.272 [2024-12-16 19:26:00.369285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:16.272 [2024-12-16 19:26:00.369293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:16.272 [2024-12-16 19:26:00.369301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:16.272 [2024-12-16 19:26:00.369310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:16.272 [2024-12-16 19:26:00.369317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:16.272 [2024-12-16 19:26:00.369326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:16.272 [2024-12-16 19:26:00.369334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:16.272 [2024-12-16 19:26:00.369343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:16.272 [2024-12-16 19:26:00.369351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:16.272 [2024-12-16 19:26:00.369359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:16.272 [2024-12-16 19:26:00.369368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:16.272 [2024-12-16 19:26:00.369376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:16.272 [2024-12-16 19:26:00.369384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:16.272 [2024-12-16 19:26:00.369400] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:16.272 [2024-12-16 19:26:00.369409] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 2ff538f2-e73b-43af-beae-0346febb4f7c 00:24:16.272 [2024-12-16 19:26:00.369417] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 109312 00:24:16.272 [2024-12-16 19:26:00.369425] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 110272 00:24:16.272 [2024-12-16 19:26:00.369432] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 109312 00:24:16.272 [2024-12-16 19:26:00.369440] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0088 00:24:16.272 [2024-12-16 19:26:00.369457] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:16.272 [2024-12-16 19:26:00.369465] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:16.272 [2024-12-16 19:26:00.369473] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:16.272 [2024-12-16 19:26:00.369479] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:16.272 [2024-12-16 19:26:00.369487] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:16.272 [2024-12-16 19:26:00.369495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.272 [2024-12-16 19:26:00.369504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:16.272 [2024-12-16 19:26:00.369513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.982 ms 00:24:16.272 [2024-12-16 19:26:00.369529] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.272 [2024-12-16 19:26:00.383182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.272 [2024-12-16 19:26:00.383219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:16.273 [2024-12-16 19:26:00.383236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.623 ms 00:24:16.273 [2024-12-16 19:26:00.383245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.273 [2024-12-16 19:26:00.383657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.273 [2024-12-16 19:26:00.383682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:16.273 [2024-12-16 19:26:00.383693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.375 ms 00:24:16.273 [2024-12-16 19:26:00.383700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.273 [2024-12-16 19:26:00.420723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:16.273 [2024-12-16 19:26:00.420767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:16.273 [2024-12-16 19:26:00.420778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:16.273 [2024-12-16 19:26:00.420787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.273 [2024-12-16 19:26:00.420851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:16.273 [2024-12-16 19:26:00.420860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:16.273 [2024-12-16 19:26:00.420869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:16.273 [2024-12-16 19:26:00.420877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.273 [2024-12-16 19:26:00.420945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:16.273 [2024-12-16 19:26:00.420955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:16.273 [2024-12-16 19:26:00.420969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:16.273 [2024-12-16 19:26:00.420977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.273 [2024-12-16 19:26:00.420992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:16.273 [2024-12-16 19:26:00.421001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:16.273 [2024-12-16 19:26:00.421009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:16.273 [2024-12-16 19:26:00.421018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.273 [2024-12-16 19:26:00.506004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:16.273 [2024-12-16 19:26:00.506259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:16.273 [2024-12-16 19:26:00.506284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:16.273 [2024-12-16 19:26:00.506293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.273 [2024-12-16 19:26:00.576666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:16.273 [2024-12-16 19:26:00.576721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:16.273 [2024-12-16 19:26:00.576734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:24:16.273 [2024-12-16 19:26:00.576743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.273 [2024-12-16 19:26:00.576825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:16.273 [2024-12-16 19:26:00.576835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:16.273 [2024-12-16 19:26:00.576845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:16.273 [2024-12-16 19:26:00.576861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.273 [2024-12-16 19:26:00.576902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:16.273 [2024-12-16 19:26:00.576912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:16.273 [2024-12-16 19:26:00.576921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:16.273 [2024-12-16 19:26:00.576931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.273 [2024-12-16 19:26:00.577030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:16.273 [2024-12-16 19:26:00.577041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:16.273 [2024-12-16 19:26:00.577049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:16.273 [2024-12-16 19:26:00.577062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.273 [2024-12-16 19:26:00.577094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:16.273 [2024-12-16 19:26:00.577104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:16.273 [2024-12-16 19:26:00.577113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:16.273 [2024-12-16 19:26:00.577121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.273 [2024-12-16 19:26:00.577163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:16.273 [2024-12-16 19:26:00.577213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:16.273 [2024-12-16 19:26:00.577223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:16.273 [2024-12-16 19:26:00.577232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.273 [2024-12-16 19:26:00.577286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:16.273 [2024-12-16 19:26:00.577297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:16.273 [2024-12-16 19:26:00.577305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:16.273 [2024-12-16 19:26:00.577313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.273 [2024-12-16 19:26:00.577450] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 749.401 ms, result 0 00:24:17.656 00:24:17.656 00:24:17.656 19:26:01 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:24:17.656 [2024-12-16 19:26:02.001912] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:24:17.656 [2024-12-16 19:26:02.002358] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81093 ] 00:24:17.917 [2024-12-16 19:26:02.169102] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:18.178 [2024-12-16 19:26:02.293516] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:24:18.440 [2024-12-16 19:26:02.588557] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:18.440 [2024-12-16 19:26:02.588846] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:18.440 [2024-12-16 19:26:02.749813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:18.440 [2024-12-16 19:26:02.749879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:18.440 [2024-12-16 19:26:02.749895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:18.440 [2024-12-16 19:26:02.749903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.440 [2024-12-16 19:26:02.749962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:18.440 [2024-12-16 19:26:02.749976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:18.440 [2024-12-16 19:26:02.749985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:24:18.440 [2024-12-16 19:26:02.749993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.440 [2024-12-16 19:26:02.750014] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:18.440 [2024-12-16 19:26:02.750819] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:18.440 [2024-12-16 19:26:02.750843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:18.440 [2024-12-16 19:26:02.750853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:18.440 [2024-12-16 19:26:02.750863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.833 ms 00:24:18.440 [2024-12-16 19:26:02.750871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.440 [2024-12-16 19:26:02.752626] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:18.440 [2024-12-16 19:26:02.766623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:18.441 [2024-12-16 19:26:02.766824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:18.441 [2024-12-16 19:26:02.766848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.998 ms 00:24:18.441 [2024-12-16 19:26:02.766858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.441 [2024-12-16 19:26:02.766934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:18.441 [2024-12-16 19:26:02.766945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:18.441 [2024-12-16 19:26:02.766954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:24:18.441 [2024-12-16 19:26:02.766962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.441 [2024-12-16 19:26:02.775205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:24:18.441 [2024-12-16 19:26:02.775245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:18.441 [2024-12-16 19:26:02.775257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.159 ms 00:24:18.441 [2024-12-16 19:26:02.775271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.441 [2024-12-16 19:26:02.775352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:18.441 [2024-12-16 19:26:02.775361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:18.441 [2024-12-16 19:26:02.775369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:24:18.441 [2024-12-16 19:26:02.775377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.441 [2024-12-16 19:26:02.775424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:18.441 [2024-12-16 19:26:02.775434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:18.441 [2024-12-16 19:26:02.775442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:24:18.441 [2024-12-16 19:26:02.775451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.441 [2024-12-16 19:26:02.775478] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:18.441 [2024-12-16 19:26:02.779491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:18.441 [2024-12-16 19:26:02.779530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:18.441 [2024-12-16 19:26:02.779543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.019 ms 00:24:18.441 [2024-12-16 19:26:02.779551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.441 [2024-12-16 19:26:02.779590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:18.441 [2024-12-16 19:26:02.779600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:18.441 [2024-12-16 19:26:02.779609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:24:18.441 [2024-12-16 19:26:02.779617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.441 [2024-12-16 19:26:02.779669] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:18.441 [2024-12-16 19:26:02.779694] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:18.441 [2024-12-16 19:26:02.779731] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:18.441 [2024-12-16 19:26:02.779750] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:24:18.441 [2024-12-16 19:26:02.779856] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:18.441 [2024-12-16 19:26:02.779867] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:18.441 [2024-12-16 19:26:02.779878] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:18.441 [2024-12-16 19:26:02.779889] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:18.441 [2024-12-16 19:26:02.779899] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:18.441 [2024-12-16 19:26:02.779907] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:18.441 [2024-12-16 19:26:02.779915] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:18.441 [2024-12-16 19:26:02.779923] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:18.441 [2024-12-16 19:26:02.779934] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:18.441 [2024-12-16 19:26:02.779943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:18.441 [2024-12-16 19:26:02.779950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:18.441 [2024-12-16 19:26:02.779958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.277 ms 00:24:18.441 [2024-12-16 19:26:02.779965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.441 [2024-12-16 19:26:02.780048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:18.441 [2024-12-16 19:26:02.780058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:18.441 [2024-12-16 19:26:02.780065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:24:18.441 [2024-12-16 19:26:02.780072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.441 [2024-12-16 19:26:02.780196] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:18.441 [2024-12-16 19:26:02.780209] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:18.441 [2024-12-16 19:26:02.780218] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:18.441 [2024-12-16 19:26:02.780226] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:18.441 [2024-12-16 19:26:02.780234] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:18.441 [2024-12-16 19:26:02.780242] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:18.441 [2024-12-16 19:26:02.780249] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:18.441 [2024-12-16 19:26:02.780257] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:18.441 [2024-12-16 19:26:02.780264] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:18.441 [2024-12-16 19:26:02.780271] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:18.441 [2024-12-16 19:26:02.780278] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:18.441 [2024-12-16 19:26:02.780286] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:18.441 [2024-12-16 19:26:02.780293] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:18.441 [2024-12-16 19:26:02.780309] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:18.441 [2024-12-16 19:26:02.780317] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:18.441 [2024-12-16 19:26:02.780324] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:18.441 [2024-12-16 19:26:02.780331] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:18.441 [2024-12-16 19:26:02.780338] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:18.441 [2024-12-16 19:26:02.780346] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:18.441 [2024-12-16 19:26:02.780353] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:18.441 [2024-12-16 19:26:02.780360] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:18.441 [2024-12-16 19:26:02.780367] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:18.441 [2024-12-16 19:26:02.780374] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:18.441 [2024-12-16 19:26:02.780381] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:18.441 [2024-12-16 19:26:02.780388] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:18.441 [2024-12-16 19:26:02.780394] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:18.441 [2024-12-16 19:26:02.780401] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:18.441 [2024-12-16 19:26:02.780408] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:18.441 [2024-12-16 19:26:02.780415] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:18.441 [2024-12-16 19:26:02.780422] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:18.441 [2024-12-16 19:26:02.780429] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:18.441 [2024-12-16 19:26:02.780436] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:18.441 [2024-12-16 19:26:02.780443] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:18.441 [2024-12-16 19:26:02.780450] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:18.441 [2024-12-16 19:26:02.780456] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:18.441 [2024-12-16 19:26:02.780462] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:18.441 [2024-12-16 19:26:02.780468] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:18.441 [2024-12-16 19:26:02.780475] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:18.441 [2024-12-16 19:26:02.780482] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:18.441 [2024-12-16 19:26:02.780488] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:18.441 [2024-12-16 19:26:02.780495] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:18.441 [2024-12-16 19:26:02.780501] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:18.441 [2024-12-16 19:26:02.780508] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:18.441 [2024-12-16 19:26:02.780515] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:18.441 [2024-12-16 19:26:02.780523] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:18.441 [2024-12-16 19:26:02.780531] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:18.441 [2024-12-16 19:26:02.780539] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:18.441 [2024-12-16 19:26:02.780547] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:18.441 [2024-12-16 19:26:02.780554] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:18.441 [2024-12-16 19:26:02.780562] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:18.441 
[2024-12-16 19:26:02.780569] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:18.441 [2024-12-16 19:26:02.780575] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:18.441 [2024-12-16 19:26:02.780582] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:18.441 [2024-12-16 19:26:02.780590] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:18.441 [2024-12-16 19:26:02.780599] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:18.441 [2024-12-16 19:26:02.780610] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:18.441 [2024-12-16 19:26:02.780617] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:18.442 [2024-12-16 19:26:02.780624] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:18.442 [2024-12-16 19:26:02.780632] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:18.442 [2024-12-16 19:26:02.780639] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:18.442 [2024-12-16 19:26:02.780646] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:18.442 [2024-12-16 19:26:02.780653] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:18.442 [2024-12-16 19:26:02.780660] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:18.442 [2024-12-16 19:26:02.780667] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:18.442 [2024-12-16 19:26:02.780675] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:18.442 [2024-12-16 19:26:02.780682] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:18.442 [2024-12-16 19:26:02.780690] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:18.442 [2024-12-16 19:26:02.780697] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:18.442 [2024-12-16 19:26:02.780704] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:18.442 [2024-12-16 19:26:02.780711] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:18.442 [2024-12-16 19:26:02.780720] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:18.442 [2024-12-16 19:26:02.780729] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:24:18.442 [2024-12-16 19:26:02.780736] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:18.442 [2024-12-16 19:26:02.780743] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:18.442 [2024-12-16 19:26:02.780749] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:18.442 [2024-12-16 19:26:02.780757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:18.442 [2024-12-16 19:26:02.780765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:18.442 [2024-12-16 19:26:02.780773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.653 ms 00:24:18.442 [2024-12-16 19:26:02.780781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.704 [2024-12-16 19:26:02.812845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:18.704 [2024-12-16 19:26:02.813059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:18.704 [2024-12-16 19:26:02.813080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.018 ms 00:24:18.704 [2024-12-16 19:26:02.813096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.704 [2024-12-16 19:26:02.813215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:18.704 [2024-12-16 19:26:02.813225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:18.704 [2024-12-16 19:26:02.813235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:24:18.704 [2024-12-16 19:26:02.813243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.704 [2024-12-16 19:26:02.866311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:18.704 [2024-12-16 19:26:02.866535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:18.704 [2024-12-16 19:26:02.866558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 53.003 ms 00:24:18.704 [2024-12-16 19:26:02.866568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.704 [2024-12-16 19:26:02.866621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:18.704 [2024-12-16 19:26:02.866632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:18.704 [2024-12-16 19:26:02.866646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:18.704 [2024-12-16 19:26:02.866654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.704 [2024-12-16 19:26:02.867284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:18.704 [2024-12-16 19:26:02.867308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:18.704 [2024-12-16 19:26:02.867319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.552 ms 00:24:18.704 [2024-12-16 19:26:02.867328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.704 [2024-12-16 19:26:02.867489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:18.704 [2024-12-16 19:26:02.867500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:18.704 [2024-12-16 19:26:02.867511] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.129 ms 00:24:18.704 [2024-12-16 19:26:02.867519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.704 [2024-12-16 19:26:02.883378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:18.704 [2024-12-16 19:26:02.883425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:18.704 [2024-12-16 19:26:02.883436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.837 ms 00:24:18.704 [2024-12-16 19:26:02.883445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.704 [2024-12-16 19:26:02.897875] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:24:18.704 [2024-12-16 19:26:02.898071] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:18.704 [2024-12-16 19:26:02.898091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:18.704 [2024-12-16 19:26:02.898100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:18.704 [2024-12-16 19:26:02.898110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.532 ms 00:24:18.704 [2024-12-16 19:26:02.898118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.704 [2024-12-16 19:26:02.923850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:18.704 [2024-12-16 19:26:02.923900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:18.704 [2024-12-16 19:26:02.923912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.686 ms 00:24:18.704 [2024-12-16 19:26:02.923921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.704 [2024-12-16 19:26:02.937016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:18.704 [2024-12-16 19:26:02.937206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:18.704 [2024-12-16 19:26:02.937228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.039 ms 00:24:18.704 [2024-12-16 19:26:02.937236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.704 [2024-12-16 19:26:02.950010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:18.704 [2024-12-16 19:26:02.950059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:18.704 [2024-12-16 19:26:02.950070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.732 ms 00:24:18.704 [2024-12-16 19:26:02.950078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.704 [2024-12-16 19:26:02.950760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:18.704 [2024-12-16 19:26:02.950798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:18.704 [2024-12-16 19:26:02.950813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.543 ms 00:24:18.704 [2024-12-16 19:26:02.950821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.704 [2024-12-16 19:26:03.015909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:18.704 [2024-12-16 19:26:03.015974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:18.704 [2024-12-16 19:26:03.015997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 65.065 ms 00:24:18.704 [2024-12-16 19:26:03.016007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.704 [2024-12-16 19:26:03.027380] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:18.704 [2024-12-16 19:26:03.030358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:18.704 [2024-12-16 19:26:03.030548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:18.704 [2024-12-16 19:26:03.030568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.294 ms 00:24:18.704 [2024-12-16 19:26:03.030577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.704 [2024-12-16 19:26:03.030670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:18.704 [2024-12-16 19:26:03.030682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:18.704 [2024-12-16 19:26:03.030691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:24:18.704 [2024-12-16 19:26:03.030703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.704 [2024-12-16 19:26:03.032506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:18.704 [2024-12-16 19:26:03.032563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:18.704 [2024-12-16 19:26:03.032574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.763 ms 00:24:18.704 [2024-12-16 19:26:03.032583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.704 [2024-12-16 19:26:03.032612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:18.704 [2024-12-16 19:26:03.032621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:18.704 [2024-12-16 19:26:03.032630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:18.704 [2024-12-16 19:26:03.032638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.704 [2024-12-16 19:26:03.032683] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:18.704 [2024-12-16 19:26:03.032695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:18.704 [2024-12-16 19:26:03.032703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:18.704 [2024-12-16 19:26:03.032712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:24:18.704 [2024-12-16 19:26:03.032720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.966 [2024-12-16 19:26:03.058491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:18.966 [2024-12-16 19:26:03.058539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:18.966 [2024-12-16 19:26:03.058559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.751 ms 00:24:18.966 [2024-12-16 19:26:03.058568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.966 [2024-12-16 19:26:03.058654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:18.966 [2024-12-16 19:26:03.058665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:18.966 [2024-12-16 19:26:03.058674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:24:18.966 [2024-12-16 19:26:03.058683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:24:18.966 [2024-12-16 19:26:03.059935] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 309.630 ms, result 0 00:24:19.909  [2024-12-16T19:26:05.652Z] Copying: 8840/1048576 [kB] (8840 kBps) [2024-12-16T19:26:06.594Z] Copying: 26/1024 [MB] (17 MBps) [2024-12-16T19:26:07.539Z] Copying: 46/1024 [MB] (20 MBps) [2024-12-16T19:26:08.483Z] Copying: 64/1024 [MB] (17 MBps) [2024-12-16T19:26:09.425Z] Copying: 74/1024 [MB] (10 MBps) [2024-12-16T19:26:10.368Z] Copying: 91/1024 [MB] (17 MBps) [2024-12-16T19:26:11.311Z] Copying: 112/1024 [MB] (20 MBps) [2024-12-16T19:26:12.255Z] Copying: 132/1024 [MB] (19 MBps) [2024-12-16T19:26:13.641Z] Copying: 153/1024 [MB] (21 MBps) [2024-12-16T19:26:14.583Z] Copying: 165/1024 [MB] (11 MBps) [2024-12-16T19:26:15.527Z] Copying: 177/1024 [MB] (12 MBps) [2024-12-16T19:26:16.470Z] Copying: 199/1024 [MB] (22 MBps) [2024-12-16T19:26:17.413Z] Copying: 216/1024 [MB] (17 MBps) [2024-12-16T19:26:18.356Z] Copying: 234/1024 [MB] (18 MBps) [2024-12-16T19:26:19.299Z] Copying: 245/1024 [MB] (10 MBps) [2024-12-16T19:26:20.309Z] Copying: 256/1024 [MB] (10 MBps) [2024-12-16T19:26:21.252Z] Copying: 266/1024 [MB] (10 MBps) [2024-12-16T19:26:22.637Z] Copying: 277/1024 [MB] (10 MBps) [2024-12-16T19:26:23.581Z] Copying: 288/1024 [MB] (10 MBps) [2024-12-16T19:26:24.523Z] Copying: 306/1024 [MB] (18 MBps) [2024-12-16T19:26:25.465Z] Copying: 325/1024 [MB] (18 MBps) [2024-12-16T19:26:26.408Z] Copying: 344/1024 [MB] (19 MBps) [2024-12-16T19:26:27.351Z] Copying: 363/1024 [MB] (18 MBps) [2024-12-16T19:26:28.295Z] Copying: 378/1024 [MB] (15 MBps) [2024-12-16T19:26:29.681Z] Copying: 391/1024 [MB] (12 MBps) [2024-12-16T19:26:30.254Z] Copying: 412/1024 [MB] (21 MBps) [2024-12-16T19:26:31.640Z] Copying: 430/1024 [MB] (17 MBps) [2024-12-16T19:26:32.584Z] Copying: 443/1024 [MB] (13 MBps) [2024-12-16T19:26:33.528Z] Copying: 455/1024 [MB] (11 MBps) [2024-12-16T19:26:34.471Z] Copying: 467/1024 [MB] (11 MBps) [2024-12-16T19:26:35.412Z] Copying: 481/1024 [MB] (14 MBps) [2024-12-16T19:26:36.356Z] Copying: 492/1024 [MB] (11 MBps) [2024-12-16T19:26:37.300Z] Copying: 511/1024 [MB] (18 MBps) [2024-12-16T19:26:38.688Z] Copying: 521/1024 [MB] (10 MBps) [2024-12-16T19:26:39.260Z] Copying: 532/1024 [MB] (10 MBps) [2024-12-16T19:26:40.649Z] Copying: 542/1024 [MB] (10 MBps) [2024-12-16T19:26:41.592Z] Copying: 553/1024 [MB] (10 MBps) [2024-12-16T19:26:42.533Z] Copying: 564/1024 [MB] (10 MBps) [2024-12-16T19:26:43.477Z] Copying: 574/1024 [MB] (10 MBps) [2024-12-16T19:26:44.421Z] Copying: 585/1024 [MB] (10 MBps) [2024-12-16T19:26:45.363Z] Copying: 596/1024 [MB] (10 MBps) [2024-12-16T19:26:46.363Z] Copying: 612/1024 [MB] (16 MBps) [2024-12-16T19:26:47.349Z] Copying: 627/1024 [MB] (15 MBps) [2024-12-16T19:26:48.291Z] Copying: 638/1024 [MB] (10 MBps) [2024-12-16T19:26:49.678Z] Copying: 648/1024 [MB] (10 MBps) [2024-12-16T19:26:50.622Z] Copying: 663/1024 [MB] (14 MBps) [2024-12-16T19:26:51.564Z] Copying: 673/1024 [MB] (10 MBps) [2024-12-16T19:26:52.509Z] Copying: 690/1024 [MB] (16 MBps) [2024-12-16T19:26:53.454Z] Copying: 701/1024 [MB] (10 MBps) [2024-12-16T19:26:54.399Z] Copying: 718/1024 [MB] (17 MBps) [2024-12-16T19:26:55.342Z] Copying: 731/1024 [MB] (12 MBps) [2024-12-16T19:26:56.283Z] Copying: 743/1024 [MB] (12 MBps) [2024-12-16T19:26:57.670Z] Copying: 753/1024 [MB] (10 MBps) [2024-12-16T19:26:58.612Z] Copying: 765/1024 [MB] (11 MBps) [2024-12-16T19:26:59.556Z] Copying: 776/1024 [MB] (10 MBps) [2024-12-16T19:27:00.501Z] Copying: 791/1024 [MB] (14 MBps) 
[2024-12-16T19:27:01.443Z] Copying: 814/1024 [MB] (23 MBps) [2024-12-16T19:27:02.388Z] Copying: 844/1024 [MB] (30 MBps) [2024-12-16T19:27:03.333Z] Copying: 869/1024 [MB] (24 MBps) [2024-12-16T19:27:04.276Z] Copying: 895/1024 [MB] (26 MBps) [2024-12-16T19:27:05.661Z] Copying: 915/1024 [MB] (20 MBps) [2024-12-16T19:27:06.605Z] Copying: 932/1024 [MB] (17 MBps) [2024-12-16T19:27:07.547Z] Copying: 947/1024 [MB] (14 MBps) [2024-12-16T19:27:08.490Z] Copying: 962/1024 [MB] (15 MBps) [2024-12-16T19:27:09.434Z] Copying: 974/1024 [MB] (11 MBps) [2024-12-16T19:27:10.378Z] Copying: 998/1024 [MB] (24 MBps) [2024-12-16T19:27:11.320Z] Copying: 1013/1024 [MB] (14 MBps) [2024-12-16T19:27:11.320Z] Copying: 1024/1024 [MB] (average 15 MBps)[2024-12-16 19:27:11.064398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.966 [2024-12-16 19:27:11.064488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:26.966 [2024-12-16 19:27:11.064507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:26.966 [2024-12-16 19:27:11.064526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.966 [2024-12-16 19:27:11.064554] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:26.966 [2024-12-16 19:27:11.069454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.966 [2024-12-16 19:27:11.069501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:26.966 [2024-12-16 19:27:11.069515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.881 ms 00:25:26.966 [2024-12-16 19:27:11.069526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.966 [2024-12-16 19:27:11.069815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.966 [2024-12-16 19:27:11.069829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:26.966 [2024-12-16 19:27:11.069841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.259 ms 00:25:26.966 [2024-12-16 19:27:11.069858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.966 [2024-12-16 19:27:11.076064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.966 [2024-12-16 19:27:11.076116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:26.966 [2024-12-16 19:27:11.076129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.185 ms 00:25:26.966 [2024-12-16 19:27:11.076137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.966 [2024-12-16 19:27:11.082300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.967 [2024-12-16 19:27:11.082341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:26.967 [2024-12-16 19:27:11.082353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.103 ms 00:25:26.967 [2024-12-16 19:27:11.082368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.967 [2024-12-16 19:27:11.108744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.967 [2024-12-16 19:27:11.108793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:26.967 [2024-12-16 19:27:11.108806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.337 ms 00:25:26.967 [2024-12-16 19:27:11.108814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:25:26.967 [2024-12-16 19:27:11.125457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.967 [2024-12-16 19:27:11.125503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:26.967 [2024-12-16 19:27:11.125516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.597 ms 00:25:26.967 [2024-12-16 19:27:11.125526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.967 [2024-12-16 19:27:11.281918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.967 [2024-12-16 19:27:11.281970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:26.967 [2024-12-16 19:27:11.281983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 156.341 ms 00:25:26.967 [2024-12-16 19:27:11.281991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.967 [2024-12-16 19:27:11.306805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.967 [2024-12-16 19:27:11.306866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:26.967 [2024-12-16 19:27:11.306879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.798 ms 00:25:26.967 [2024-12-16 19:27:11.306887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.232 [2024-12-16 19:27:11.331425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:27.232 [2024-12-16 19:27:11.331470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:27.232 [2024-12-16 19:27:11.331482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.493 ms 00:25:27.232 [2024-12-16 19:27:11.331489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.232 [2024-12-16 19:27:11.355449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:27.232 [2024-12-16 19:27:11.355494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:27.232 [2024-12-16 19:27:11.355505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.916 ms 00:25:27.232 [2024-12-16 19:27:11.355512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.232 [2024-12-16 19:27:11.379922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:27.232 [2024-12-16 19:27:11.379967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:27.232 [2024-12-16 19:27:11.379980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.342 ms 00:25:27.232 [2024-12-16 19:27:11.379987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.232 [2024-12-16 19:27:11.380029] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:27.232 [2024-12-16 19:27:11.380046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:25:27.232 [2024-12-16 19:27:11.380057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:27.232 [2024-12-16 19:27:11.380066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:27.232 [2024-12-16 19:27:11.380074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:27.232 [2024-12-16 19:27:11.380082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 
state: free 00:25:27.232 [2024-12-16 19:27:11.380090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:27.232 [2024-12-16 19:27:11.380097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:27.232 [2024-12-16 19:27:11.380105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:27.232 [2024-12-16 19:27:11.380113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:27.232 [2024-12-16 19:27:11.380121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:27.232 [2024-12-16 19:27:11.380129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:27.232 [2024-12-16 19:27:11.380137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:27.232 [2024-12-16 19:27:11.380145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:27.232 [2024-12-16 19:27:11.380152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:27.232 [2024-12-16 19:27:11.380159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:27.232 [2024-12-16 19:27:11.380166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:27.232 [2024-12-16 19:27:11.380192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:27.232 [2024-12-16 19:27:11.380200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:27.232 [2024-12-16 19:27:11.380207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:27.232 [2024-12-16 19:27:11.380214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:27.232 [2024-12-16 19:27:11.380222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:27.232 [2024-12-16 19:27:11.380230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:27.232 [2024-12-16 19:27:11.380237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:27.232 [2024-12-16 19:27:11.380245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:27.232 [2024-12-16 19:27:11.380252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:27.232 [2024-12-16 19:27:11.380260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:27.232 [2024-12-16 19:27:11.380270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:27.232 [2024-12-16 19:27:11.380278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:27.232 [2024-12-16 19:27:11.380286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:27.232 [2024-12-16 19:27:11.380298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 
261120 wr_cnt: 0 state: free 00:25:27.232 [2024-12-16 19:27:11.380306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:27.232 [2024-12-16 19:27:11.380314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:27.232 [2024-12-16 19:27:11.380322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:27.232 [2024-12-16 19:27:11.380330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:27.232 [2024-12-16 19:27:11.380337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:27.233 [2024-12-16 19:27:11.380346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:27.233 [2024-12-16 19:27:11.380354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:27.233 [2024-12-16 19:27:11.380361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:27.233 [2024-12-16 19:27:11.380369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:27.233 [2024-12-16 19:27:11.380376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:27.233 [2024-12-16 19:27:11.380383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:27.233 [2024-12-16 19:27:11.380391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:27.233 [2024-12-16 19:27:11.380398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:27.233 [2024-12-16 19:27:11.380406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:27.233 [2024-12-16 19:27:11.380413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:27.233 [2024-12-16 19:27:11.380421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:27.233 [2024-12-16 19:27:11.380428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:27.233 [2024-12-16 19:27:11.380436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:27.233 [2024-12-16 19:27:11.380443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:27.233 [2024-12-16 19:27:11.380451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:27.233 [2024-12-16 19:27:11.380458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:27.233 [2024-12-16 19:27:11.380466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:27.233 [2024-12-16 19:27:11.380473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:27.233 [2024-12-16 19:27:11.380481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:27.233 [2024-12-16 19:27:11.380489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:27.233 [2024-12-16 19:27:11.380497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:27.233 [2024-12-16 19:27:11.380505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:27.233 [2024-12-16 19:27:11.380511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:27.233 [2024-12-16 19:27:11.380518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:27.233 [2024-12-16 19:27:11.380526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:27.233 [2024-12-16 19:27:11.380534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:27.233 [2024-12-16 19:27:11.380543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:27.233 [2024-12-16 19:27:11.380558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:27.233 [2024-12-16 19:27:11.380566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:27.233 [2024-12-16 19:27:11.380573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:27.233 [2024-12-16 19:27:11.380581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:27.233 [2024-12-16 19:27:11.380589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:27.233 [2024-12-16 19:27:11.380597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:27.233 [2024-12-16 19:27:11.380604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:27.233 [2024-12-16 19:27:11.380613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:27.233 [2024-12-16 19:27:11.380621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:27.233 [2024-12-16 19:27:11.380628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:27.233 [2024-12-16 19:27:11.380636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:27.233 [2024-12-16 19:27:11.380643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:27.233 [2024-12-16 19:27:11.380651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:27.233 [2024-12-16 19:27:11.380658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:27.233 [2024-12-16 19:27:11.380666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:27.233 [2024-12-16 19:27:11.380674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:27.233 [2024-12-16 19:27:11.380682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:27.233 [2024-12-16 19:27:11.380690] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:27.233 [2024-12-16 19:27:11.380698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:27.233 [2024-12-16 19:27:11.380706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:27.233 [2024-12-16 19:27:11.380713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:27.233 [2024-12-16 19:27:11.380721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:27.233 [2024-12-16 19:27:11.380728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:27.233 [2024-12-16 19:27:11.380736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:27.233 [2024-12-16 19:27:11.380744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:27.233 [2024-12-16 19:27:11.380751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:27.233 [2024-12-16 19:27:11.380759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:27.233 [2024-12-16 19:27:11.380766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:27.233 [2024-12-16 19:27:11.380773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:27.233 [2024-12-16 19:27:11.380781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:27.233 [2024-12-16 19:27:11.380790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:27.233 [2024-12-16 19:27:11.380798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:27.233 [2024-12-16 19:27:11.380806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:27.233 [2024-12-16 19:27:11.380813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:27.233 [2024-12-16 19:27:11.380821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:27.233 [2024-12-16 19:27:11.380828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:27.233 [2024-12-16 19:27:11.380836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:27.233 [2024-12-16 19:27:11.380844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:27.233 [2024-12-16 19:27:11.380860] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:27.233 [2024-12-16 19:27:11.380868] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 2ff538f2-e73b-43af-beae-0346febb4f7c 00:25:27.233 [2024-12-16 19:27:11.380877] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:25:27.233 [2024-12-16 19:27:11.380884] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 22720 00:25:27.233 [2024-12-16 19:27:11.380892] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user 
writes: 21760 00:25:27.233 [2024-12-16 19:27:11.380900] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0441 00:25:27.233 [2024-12-16 19:27:11.380914] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:27.233 [2024-12-16 19:27:11.380930] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:27.233 [2024-12-16 19:27:11.380938] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:27.233 [2024-12-16 19:27:11.380945] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:27.233 [2024-12-16 19:27:11.380952] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:27.233 [2024-12-16 19:27:11.380959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:27.233 [2024-12-16 19:27:11.380968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:27.233 [2024-12-16 19:27:11.380976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.931 ms 00:25:27.233 [2024-12-16 19:27:11.380984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.233 [2024-12-16 19:27:11.394208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:27.233 [2024-12-16 19:27:11.394249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:27.233 [2024-12-16 19:27:11.394267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.205 ms 00:25:27.233 [2024-12-16 19:27:11.394275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.233 [2024-12-16 19:27:11.394695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:27.233 [2024-12-16 19:27:11.394706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:27.233 [2024-12-16 19:27:11.394715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.385 ms 00:25:27.233 [2024-12-16 19:27:11.394724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.233 [2024-12-16 19:27:11.430925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:27.233 [2024-12-16 19:27:11.430976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:27.233 [2024-12-16 19:27:11.430988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:27.233 [2024-12-16 19:27:11.430997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.233 [2024-12-16 19:27:11.431062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:27.233 [2024-12-16 19:27:11.431072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:27.233 [2024-12-16 19:27:11.431081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:27.233 [2024-12-16 19:27:11.431090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.233 [2024-12-16 19:27:11.431152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:27.233 [2024-12-16 19:27:11.431163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:27.234 [2024-12-16 19:27:11.431200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:27.234 [2024-12-16 19:27:11.431208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.234 [2024-12-16 19:27:11.431225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:27.234 [2024-12-16 19:27:11.431235] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:27.234 [2024-12-16 19:27:11.431244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:27.234 [2024-12-16 19:27:11.431252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.234 [2024-12-16 19:27:11.514856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:27.234 [2024-12-16 19:27:11.514934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:27.234 [2024-12-16 19:27:11.514948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:27.234 [2024-12-16 19:27:11.514956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.521 [2024-12-16 19:27:11.583916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:27.521 [2024-12-16 19:27:11.583969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:27.521 [2024-12-16 19:27:11.583981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:27.521 [2024-12-16 19:27:11.583990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.521 [2024-12-16 19:27:11.584066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:27.521 [2024-12-16 19:27:11.584076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:27.521 [2024-12-16 19:27:11.584085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:27.521 [2024-12-16 19:27:11.584100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.521 [2024-12-16 19:27:11.584138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:27.521 [2024-12-16 19:27:11.584148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:27.521 [2024-12-16 19:27:11.584157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:27.521 [2024-12-16 19:27:11.584166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.521 [2024-12-16 19:27:11.584290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:27.521 [2024-12-16 19:27:11.584301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:27.521 [2024-12-16 19:27:11.584310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:27.521 [2024-12-16 19:27:11.584319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.521 [2024-12-16 19:27:11.584354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:27.521 [2024-12-16 19:27:11.584364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:27.521 [2024-12-16 19:27:11.584373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:27.521 [2024-12-16 19:27:11.584381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.521 [2024-12-16 19:27:11.584419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:27.521 [2024-12-16 19:27:11.584429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:27.521 [2024-12-16 19:27:11.584437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:27.521 [2024-12-16 19:27:11.584446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.521 [2024-12-16 19:27:11.584493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Rollback 00:25:27.521 [2024-12-16 19:27:11.584503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:27.521 [2024-12-16 19:27:11.584512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:27.521 [2024-12-16 19:27:11.584520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.521 [2024-12-16 19:27:11.584662] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 520.233 ms, result 0 00:25:28.091 00:25:28.091 00:25:28.091 19:27:12 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:25:30.637 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:25:30.637 19:27:14 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:25:30.637 19:27:14 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:25:30.637 19:27:14 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:25:30.637 19:27:14 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:25:30.637 19:27:14 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:30.637 19:27:14 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 79042 00:25:30.637 19:27:14 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 79042 ']' 00:25:30.637 Process with pid 79042 is not found 00:25:30.637 19:27:14 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 79042 00:25:30.637 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (79042) - No such process 00:25:30.637 19:27:14 ftl.ftl_restore -- common/autotest_common.sh@981 -- # echo 'Process with pid 79042 is not found' 00:25:30.637 19:27:14 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:25:30.637 Remove shared memory files 00:25:30.637 19:27:14 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:25:30.637 19:27:14 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:25:30.637 19:27:14 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:25:30.637 19:27:14 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:25:30.637 19:27:14 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:25:30.637 19:27:14 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:25:30.637 ************************************ 00:25:30.637 END TEST ftl_restore 00:25:30.637 ************************************ 00:25:30.637 00:25:30.637 real 4m29.879s 00:25:30.637 user 4m17.634s 00:25:30.637 sys 0m11.994s 00:25:30.637 19:27:14 ftl.ftl_restore -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:30.637 19:27:14 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:25:30.637 19:27:14 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:25:30.637 19:27:14 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:25:30.637 19:27:14 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:30.637 19:27:14 ftl -- common/autotest_common.sh@10 -- # set +x 00:25:30.637 ************************************ 00:25:30.637 START TEST ftl_dirty_shutdown 00:25:30.637 ************************************ 00:25:30.637 19:27:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:25:30.637 * 
Looking for test storage... 00:25:30.637 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:25:30.637 19:27:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:30.637 19:27:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:25:30.637 19:27:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:30.637 19:27:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:30.637 19:27:14 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:30.637 19:27:14 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:30.637 19:27:14 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:30.637 19:27:14 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:25:30.637 19:27:14 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:25:30.637 19:27:14 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:25:30.637 19:27:14 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:25:30.637 19:27:14 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:25:30.637 19:27:14 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:25:30.637 19:27:14 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:25:30.637 19:27:14 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:30.637 19:27:14 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:25:30.637 19:27:14 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:25:30.637 19:27:14 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:30.637 19:27:14 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:30.637 19:27:14 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:25:30.637 19:27:14 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:25:30.637 19:27:14 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:30.637 19:27:14 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:25:30.637 19:27:14 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:25:30.637 19:27:14 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:25:30.637 19:27:14 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:25:30.637 19:27:14 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:30.637 19:27:14 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:25:30.637 19:27:14 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:25:30.637 19:27:14 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:30.637 19:27:14 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:30.637 19:27:14 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:25:30.637 19:27:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:30.637 19:27:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:30.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:30.637 --rc genhtml_branch_coverage=1 00:25:30.637 --rc genhtml_function_coverage=1 00:25:30.637 --rc genhtml_legend=1 00:25:30.637 --rc geninfo_all_blocks=1 00:25:30.637 --rc geninfo_unexecuted_blocks=1 00:25:30.637 00:25:30.637 ' 00:25:30.637 19:27:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:30.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:30.637 --rc genhtml_branch_coverage=1 00:25:30.637 --rc genhtml_function_coverage=1 00:25:30.637 --rc genhtml_legend=1 00:25:30.637 --rc geninfo_all_blocks=1 00:25:30.637 --rc geninfo_unexecuted_blocks=1 00:25:30.637 00:25:30.637 ' 00:25:30.637 19:27:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:30.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:30.637 --rc genhtml_branch_coverage=1 00:25:30.637 --rc genhtml_function_coverage=1 00:25:30.637 --rc genhtml_legend=1 00:25:30.637 --rc geninfo_all_blocks=1 00:25:30.637 --rc geninfo_unexecuted_blocks=1 00:25:30.637 00:25:30.637 ' 00:25:30.637 19:27:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:30.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:30.637 --rc genhtml_branch_coverage=1 00:25:30.637 --rc genhtml_function_coverage=1 00:25:30.637 --rc genhtml_legend=1 00:25:30.637 --rc geninfo_all_blocks=1 00:25:30.637 --rc geninfo_unexecuted_blocks=1 00:25:30.637 00:25:30.637 ' 00:25:30.637 19:27:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:25:30.637 19:27:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:25:30.637 19:27:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:25:30.637 19:27:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:25:30.637 19:27:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:25:30.637 19:27:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:25:30.637 19:27:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:30.637 19:27:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:25:30.637 19:27:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:25:30.637 19:27:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:30.637 19:27:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:30.637 19:27:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:25:30.637 19:27:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:25:30.637 19:27:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:30.637 19:27:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:30.637 19:27:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:25:30.637 19:27:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:25:30.637 19:27:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:30.637 19:27:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:30.637 19:27:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:25:30.637 19:27:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:25:30.637 19:27:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:30.637 19:27:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:30.637 19:27:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:30.637 19:27:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:30.637 19:27:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:25:30.637 19:27:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:25:30.637 19:27:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:30.637 19:27:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:30.637 19:27:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:30.637 19:27:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:30.907 19:27:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:25:30.907 19:27:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:25:30.907 19:27:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:25:30.907 19:27:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:25:30.907 19:27:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:25:30.907 19:27:14 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:25:30.907 19:27:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:25:30.907 19:27:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:25:30.907 19:27:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:25:30.907 19:27:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:25:30.907 19:27:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:25:30.907 19:27:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=81896 00:25:30.907 19:27:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 81896 00:25:30.907 19:27:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # '[' -z 81896 ']' 00:25:30.907 19:27:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:25:30.907 19:27:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:30.907 19:27:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:30.907 19:27:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:30.907 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:30.907 19:27:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:30.907 19:27:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:25:30.907 [2024-12-16 19:27:15.076799] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
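The trace above shows dirty_shutdown.sh's option parsing (-c selects the NV-cache PCI address 0000:00:10.0, the trailing argument 0000:00:11.0 the base device) and the launch of spdk_tgt pinned to a single core (-m 0x1) as pid 81896; waitforlisten then blocks until the target answers on /var/tmp/spdk.sock. A minimal sketch of that readiness loop, assuming only rpc.py's standard -s/-t options and the rpc_get_methods RPC; the real waitforlisten in autotest_common.sh adds a retry cap and richer error reporting:

    # Poll until pid 81896 either dies or starts serving RPCs on the UNIX socket.
    spdk_tgt_pid=81896
    rpc_addr=/var/tmp/spdk.sock
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    while kill -0 "$spdk_tgt_pid" 2> /dev/null; do
        if "$rpc" -s "$rpc_addr" -t 1 rpc_get_methods &> /dev/null; then
            break    # target is up; the EAL banner below confirms the one-core 0x1 mask
        fi
        sleep 0.5
    done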
00:25:30.907 [2024-12-16 19:27:15.077189] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81896 ] 00:25:30.907 [2024-12-16 19:27:15.241900] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:31.167 [2024-12-16 19:27:15.368291] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:25:31.738 19:27:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:31.738 19:27:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # return 0 00:25:31.738 19:27:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:25:31.738 19:27:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:25:31.738 19:27:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:25:31.738 19:27:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:25:31.738 19:27:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:25:31.738 19:27:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:25:32.310 19:27:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:25:32.310 19:27:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:25:32.310 19:27:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:25:32.310 19:27:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:25:32.310 19:27:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:32.310 19:27:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:25:32.310 19:27:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:25:32.310 19:27:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:25:32.310 19:27:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:32.310 { 00:25:32.310 "name": "nvme0n1", 00:25:32.310 "aliases": [ 00:25:32.310 "226133be-4514-446f-8ec8-6644f82cca04" 00:25:32.310 ], 00:25:32.310 "product_name": "NVMe disk", 00:25:32.310 "block_size": 4096, 00:25:32.310 "num_blocks": 1310720, 00:25:32.310 "uuid": "226133be-4514-446f-8ec8-6644f82cca04", 00:25:32.310 "numa_id": -1, 00:25:32.310 "assigned_rate_limits": { 00:25:32.310 "rw_ios_per_sec": 0, 00:25:32.310 "rw_mbytes_per_sec": 0, 00:25:32.310 "r_mbytes_per_sec": 0, 00:25:32.310 "w_mbytes_per_sec": 0 00:25:32.310 }, 00:25:32.310 "claimed": true, 00:25:32.310 "claim_type": "read_many_write_one", 00:25:32.310 "zoned": false, 00:25:32.310 "supported_io_types": { 00:25:32.310 "read": true, 00:25:32.310 "write": true, 00:25:32.310 "unmap": true, 00:25:32.310 "flush": true, 00:25:32.310 "reset": true, 00:25:32.310 "nvme_admin": true, 00:25:32.310 "nvme_io": true, 00:25:32.310 "nvme_io_md": false, 00:25:32.310 "write_zeroes": true, 00:25:32.310 "zcopy": false, 00:25:32.310 "get_zone_info": false, 00:25:32.310 "zone_management": false, 00:25:32.310 "zone_append": false, 00:25:32.310 "compare": true, 00:25:32.310 "compare_and_write": false, 00:25:32.310 "abort": true, 00:25:32.310 "seek_hole": false, 00:25:32.310 "seek_data": false, 00:25:32.310 
"copy": true, 00:25:32.310 "nvme_iov_md": false 00:25:32.310 }, 00:25:32.310 "driver_specific": { 00:25:32.310 "nvme": [ 00:25:32.310 { 00:25:32.310 "pci_address": "0000:00:11.0", 00:25:32.310 "trid": { 00:25:32.310 "trtype": "PCIe", 00:25:32.310 "traddr": "0000:00:11.0" 00:25:32.310 }, 00:25:32.310 "ctrlr_data": { 00:25:32.310 "cntlid": 0, 00:25:32.310 "vendor_id": "0x1b36", 00:25:32.310 "model_number": "QEMU NVMe Ctrl", 00:25:32.310 "serial_number": "12341", 00:25:32.310 "firmware_revision": "8.0.0", 00:25:32.310 "subnqn": "nqn.2019-08.org.qemu:12341", 00:25:32.310 "oacs": { 00:25:32.310 "security": 0, 00:25:32.310 "format": 1, 00:25:32.310 "firmware": 0, 00:25:32.310 "ns_manage": 1 00:25:32.310 }, 00:25:32.310 "multi_ctrlr": false, 00:25:32.310 "ana_reporting": false 00:25:32.310 }, 00:25:32.310 "vs": { 00:25:32.310 "nvme_version": "1.4" 00:25:32.310 }, 00:25:32.310 "ns_data": { 00:25:32.310 "id": 1, 00:25:32.310 "can_share": false 00:25:32.310 } 00:25:32.310 } 00:25:32.310 ], 00:25:32.310 "mp_policy": "active_passive" 00:25:32.310 } 00:25:32.310 } 00:25:32.310 ]' 00:25:32.310 19:27:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:25:32.310 19:27:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:25:32.310 19:27:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:32.310 19:27:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:25:32.310 19:27:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:25:32.310 19:27:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:25:32.310 19:27:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:25:32.310 19:27:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:25:32.310 19:27:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:25:32.310 19:27:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:25:32.310 19:27:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:25:32.571 19:27:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=054173d7-433f-4a23-b4a9-29958cab9565 00:25:32.571 19:27:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:25:32.571 19:27:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 054173d7-433f-4a23-b4a9-29958cab9565 00:25:32.832 19:27:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:25:33.093 19:27:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=7fc9b654-c291-424c-8958-19a308878252 00:25:33.093 19:27:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 7fc9b654-c291-424c-8958-19a308878252 00:25:33.355 19:27:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=87cd1dc3-b994-4248-90f4-7e768dd5f80b 00:25:33.355 19:27:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:25:33.355 19:27:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 87cd1dc3-b994-4248-90f4-7e768dd5f80b 00:25:33.355 19:27:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:25:33.355 19:27:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 00:25:33.355 19:27:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=87cd1dc3-b994-4248-90f4-7e768dd5f80b 00:25:33.355 19:27:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:25:33.355 19:27:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size 87cd1dc3-b994-4248-90f4-7e768dd5f80b 00:25:33.355 19:27:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=87cd1dc3-b994-4248-90f4-7e768dd5f80b 00:25:33.355 19:27:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:33.355 19:27:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:25:33.355 19:27:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:25:33.355 19:27:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 87cd1dc3-b994-4248-90f4-7e768dd5f80b 00:25:33.617 19:27:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:33.617 { 00:25:33.617 "name": "87cd1dc3-b994-4248-90f4-7e768dd5f80b", 00:25:33.617 "aliases": [ 00:25:33.617 "lvs/nvme0n1p0" 00:25:33.617 ], 00:25:33.617 "product_name": "Logical Volume", 00:25:33.617 "block_size": 4096, 00:25:33.617 "num_blocks": 26476544, 00:25:33.617 "uuid": "87cd1dc3-b994-4248-90f4-7e768dd5f80b", 00:25:33.617 "assigned_rate_limits": { 00:25:33.617 "rw_ios_per_sec": 0, 00:25:33.617 "rw_mbytes_per_sec": 0, 00:25:33.617 "r_mbytes_per_sec": 0, 00:25:33.617 "w_mbytes_per_sec": 0 00:25:33.617 }, 00:25:33.617 "claimed": false, 00:25:33.617 "zoned": false, 00:25:33.617 "supported_io_types": { 00:25:33.617 "read": true, 00:25:33.617 "write": true, 00:25:33.617 "unmap": true, 00:25:33.617 "flush": false, 00:25:33.617 "reset": true, 00:25:33.617 "nvme_admin": false, 00:25:33.617 "nvme_io": false, 00:25:33.617 "nvme_io_md": false, 00:25:33.617 "write_zeroes": true, 00:25:33.617 "zcopy": false, 00:25:33.617 "get_zone_info": false, 00:25:33.617 "zone_management": false, 00:25:33.617 "zone_append": false, 00:25:33.617 "compare": false, 00:25:33.617 "compare_and_write": false, 00:25:33.617 "abort": false, 00:25:33.617 "seek_hole": true, 00:25:33.617 "seek_data": true, 00:25:33.617 "copy": false, 00:25:33.617 "nvme_iov_md": false 00:25:33.617 }, 00:25:33.617 "driver_specific": { 00:25:33.617 "lvol": { 00:25:33.617 "lvol_store_uuid": "7fc9b654-c291-424c-8958-19a308878252", 00:25:33.617 "base_bdev": "nvme0n1", 00:25:33.617 "thin_provision": true, 00:25:33.617 "num_allocated_clusters": 0, 00:25:33.617 "snapshot": false, 00:25:33.617 "clone": false, 00:25:33.617 "esnap_clone": false 00:25:33.617 } 00:25:33.617 } 00:25:33.617 } 00:25:33.617 ]' 00:25:33.617 19:27:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:25:33.617 19:27:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:25:33.617 19:27:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:33.617 19:27:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:25:33.617 19:27:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:25:33.617 19:27:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:25:33.617 19:27:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:25:33.617 19:27:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:25:33.617 19:27:17 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
00:25:33.878 19:27:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1
00:25:33.878 19:27:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]]
00:25:33.878 19:27:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size 87cd1dc3-b994-4248-90f4-7e768dd5f80b
00:25:33.878 19:27:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=87cd1dc3-b994-4248-90f4-7e768dd5f80b
00:25:33.878 19:27:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info
00:25:33.878 19:27:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs
00:25:33.878 19:27:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb
00:25:33.878 19:27:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 87cd1dc3-b994-4248-90f4-7e768dd5f80b
00:25:34.139 19:27:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[
00:25:34.139 {
00:25:34.139 "name": "87cd1dc3-b994-4248-90f4-7e768dd5f80b",
00:25:34.139 "aliases": [
00:25:34.139 "lvs/nvme0n1p0"
00:25:34.139 ],
00:25:34.139 "product_name": "Logical Volume",
00:25:34.139 "block_size": 4096,
00:25:34.139 "num_blocks": 26476544,
00:25:34.139 "uuid": "87cd1dc3-b994-4248-90f4-7e768dd5f80b",
00:25:34.139 "assigned_rate_limits": {
00:25:34.139 "rw_ios_per_sec": 0,
00:25:34.139 "rw_mbytes_per_sec": 0,
00:25:34.139 "r_mbytes_per_sec": 0,
00:25:34.139 "w_mbytes_per_sec": 0
00:25:34.139 },
00:25:34.139 "claimed": false,
00:25:34.139 "zoned": false,
00:25:34.139 "supported_io_types": {
00:25:34.139 "read": true,
00:25:34.139 "write": true,
00:25:34.139 "unmap": true,
00:25:34.139 "flush": false,
00:25:34.139 "reset": true,
00:25:34.139 "nvme_admin": false,
00:25:34.139 "nvme_io": false,
00:25:34.139 "nvme_io_md": false,
00:25:34.139 "write_zeroes": true,
00:25:34.139 "zcopy": false,
00:25:34.139 "get_zone_info": false,
00:25:34.139 "zone_management": false,
00:25:34.139 "zone_append": false,
00:25:34.139 "compare": false,
00:25:34.139 "compare_and_write": false,
00:25:34.139 "abort": false,
00:25:34.139 "seek_hole": true,
00:25:34.139 "seek_data": true,
00:25:34.139 "copy": false,
00:25:34.139 "nvme_iov_md": false
00:25:34.139 },
00:25:34.139 "driver_specific": {
00:25:34.139 "lvol": {
00:25:34.139 "lvol_store_uuid": "7fc9b654-c291-424c-8958-19a308878252",
00:25:34.139 "base_bdev": "nvme0n1",
00:25:34.139 "thin_provision": true,
00:25:34.139 "num_allocated_clusters": 0,
00:25:34.139 "snapshot": false,
00:25:34.139 "clone": false,
00:25:34.139 "esnap_clone": false
00:25:34.139 }
00:25:34.139 }
00:25:34.139 }
00:25:34.139 ]'
00:25:34.139 19:27:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size'
00:25:34.139 19:27:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096
00:25:34.139 19:27:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks'
00:25:34.139 19:27:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544
00:25:34.139 19:27:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424
00:25:34.139 19:27:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424
00:25:34.139 19:27:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171
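The cache_size=5171 above is derived rather than configured: 103424 MiB, the thin volume's logical size, divided by 20 gives 5171 in bash integer arithmetic, so the write-buffer cache is provisioned at roughly one-twentieth of the base capacity (the divisor is inferred from this run's numbers, not quoted from common.sh). The bdev_split_create call that follows carves a single 5171 MiB partition, nvc0n1p0, out of the cache-side controller, and dirty_shutdown.sh then assembles the FTL device over the pair; sketched with this run's values:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    cache_size_mib=$((103424 / 20))   # 5171
    # one 5171 MiB split on the cache-side controller -> nvc0n1p0
    "$rpc" bdev_split_create nvc0n1 -s "$cache_size_mib" 1
    # FTL over base lvol + cache split, with the L2P capped at 10 MiB of DRAM
    "$rpc" -t 240 bdev_ftl_create -b ftl0 -d 87cd1dc3-b994-4248-90f4-7e768dd5f80b --l2p_dram_limit 10 -c nvc0n1p0

The limit matters for what follows: the startup dump further down reports 20971520 L2P entries at 4 bytes each, an 80 MiB table in total, so a 10 MiB cap forces the L2P to be paged in and out under load, presumably the state a dirty-shutdown test wants to exercise.

00:25:34.139 19:27:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- #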
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:25:34.401 19:27:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:25:34.401 19:27:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size 87cd1dc3-b994-4248-90f4-7e768dd5f80b 00:25:34.401 19:27:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=87cd1dc3-b994-4248-90f4-7e768dd5f80b 00:25:34.401 19:27:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:34.401 19:27:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:25:34.401 19:27:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:25:34.401 19:27:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 87cd1dc3-b994-4248-90f4-7e768dd5f80b 00:25:34.401 19:27:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:34.401 { 00:25:34.401 "name": "87cd1dc3-b994-4248-90f4-7e768dd5f80b", 00:25:34.401 "aliases": [ 00:25:34.401 "lvs/nvme0n1p0" 00:25:34.401 ], 00:25:34.401 "product_name": "Logical Volume", 00:25:34.401 "block_size": 4096, 00:25:34.401 "num_blocks": 26476544, 00:25:34.401 "uuid": "87cd1dc3-b994-4248-90f4-7e768dd5f80b", 00:25:34.401 "assigned_rate_limits": { 00:25:34.401 "rw_ios_per_sec": 0, 00:25:34.401 "rw_mbytes_per_sec": 0, 00:25:34.401 "r_mbytes_per_sec": 0, 00:25:34.401 "w_mbytes_per_sec": 0 00:25:34.401 }, 00:25:34.401 "claimed": false, 00:25:34.401 "zoned": false, 00:25:34.401 "supported_io_types": { 00:25:34.401 "read": true, 00:25:34.401 "write": true, 00:25:34.401 "unmap": true, 00:25:34.401 "flush": false, 00:25:34.401 "reset": true, 00:25:34.401 "nvme_admin": false, 00:25:34.401 "nvme_io": false, 00:25:34.401 "nvme_io_md": false, 00:25:34.401 "write_zeroes": true, 00:25:34.401 "zcopy": false, 00:25:34.401 "get_zone_info": false, 00:25:34.401 "zone_management": false, 00:25:34.401 "zone_append": false, 00:25:34.401 "compare": false, 00:25:34.401 "compare_and_write": false, 00:25:34.401 "abort": false, 00:25:34.401 "seek_hole": true, 00:25:34.401 "seek_data": true, 00:25:34.401 "copy": false, 00:25:34.401 "nvme_iov_md": false 00:25:34.401 }, 00:25:34.401 "driver_specific": { 00:25:34.401 "lvol": { 00:25:34.401 "lvol_store_uuid": "7fc9b654-c291-424c-8958-19a308878252", 00:25:34.401 "base_bdev": "nvme0n1", 00:25:34.401 "thin_provision": true, 00:25:34.401 "num_allocated_clusters": 0, 00:25:34.401 "snapshot": false, 00:25:34.401 "clone": false, 00:25:34.401 "esnap_clone": false 00:25:34.401 } 00:25:34.401 } 00:25:34.401 } 00:25:34.401 ]' 00:25:34.401 19:27:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:25:34.401 19:27:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:25:34.401 19:27:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:34.663 19:27:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:25:34.663 19:27:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:25:34.663 19:27:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:25:34.663 19:27:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:25:34.663 19:27:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 87cd1dc3-b994-4248-90f4-7e768dd5f80b 
--l2p_dram_limit 10' 00:25:34.663 19:27:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:25:34.663 19:27:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:25:34.663 19:27:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:25:34.663 19:27:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 87cd1dc3-b994-4248-90f4-7e768dd5f80b --l2p_dram_limit 10 -c nvc0n1p0 00:25:34.663 [2024-12-16 19:27:18.966189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.663 [2024-12-16 19:27:18.966305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:34.663 [2024-12-16 19:27:18.966324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:25:34.663 [2024-12-16 19:27:18.966331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.663 [2024-12-16 19:27:18.966383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.663 [2024-12-16 19:27:18.966391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:34.663 [2024-12-16 19:27:18.966398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:25:34.663 [2024-12-16 19:27:18.966404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.663 [2024-12-16 19:27:18.966425] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:34.663 [2024-12-16 19:27:18.966990] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:34.663 [2024-12-16 19:27:18.967006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.663 [2024-12-16 19:27:18.967013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:34.663 [2024-12-16 19:27:18.967021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.588 ms 00:25:34.663 [2024-12-16 19:27:18.967027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.663 [2024-12-16 19:27:18.967079] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 216c5858-1f24-4a0e-85c5-3daa64fc07f6 00:25:34.663 [2024-12-16 19:27:18.968022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.663 [2024-12-16 19:27:18.968045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:25:34.663 [2024-12-16 19:27:18.968053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:25:34.663 [2024-12-16 19:27:18.968060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.663 [2024-12-16 19:27:18.972667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.663 [2024-12-16 19:27:18.972696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:34.663 [2024-12-16 19:27:18.972704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.575 ms 00:25:34.663 [2024-12-16 19:27:18.972711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.663 [2024-12-16 19:27:18.972775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.663 [2024-12-16 19:27:18.972785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:34.663 [2024-12-16 19:27:18.972791] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:25:34.663 [2024-12-16 19:27:18.972800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.663 [2024-12-16 19:27:18.972833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.663 [2024-12-16 19:27:18.972842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:34.663 [2024-12-16 19:27:18.972848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:25:34.663 [2024-12-16 19:27:18.972856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.663 [2024-12-16 19:27:18.972874] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:34.663 [2024-12-16 19:27:18.975773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.663 [2024-12-16 19:27:18.975797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:34.663 [2024-12-16 19:27:18.975806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.903 ms 00:25:34.663 [2024-12-16 19:27:18.975812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.663 [2024-12-16 19:27:18.975840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.663 [2024-12-16 19:27:18.975847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:34.663 [2024-12-16 19:27:18.975854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:25:34.663 [2024-12-16 19:27:18.975860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.663 [2024-12-16 19:27:18.975879] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:25:34.663 [2024-12-16 19:27:18.975987] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:34.663 [2024-12-16 19:27:18.975999] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:34.663 [2024-12-16 19:27:18.976007] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:34.663 [2024-12-16 19:27:18.976017] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:34.663 [2024-12-16 19:27:18.976024] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:34.664 [2024-12-16 19:27:18.976031] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:34.664 [2024-12-16 19:27:18.976037] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:34.664 [2024-12-16 19:27:18.976046] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:34.664 [2024-12-16 19:27:18.976051] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:34.664 [2024-12-16 19:27:18.976058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.664 [2024-12-16 19:27:18.976069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:34.664 [2024-12-16 19:27:18.976076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.180 ms 00:25:34.664 [2024-12-16 19:27:18.976082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.664 [2024-12-16 19:27:18.976148] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.664 [2024-12-16 19:27:18.976154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:34.664 [2024-12-16 19:27:18.976161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:25:34.664 [2024-12-16 19:27:18.976166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.664 [2024-12-16 19:27:18.976251] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:34.664 [2024-12-16 19:27:18.976259] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:34.664 [2024-12-16 19:27:18.976266] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:34.664 [2024-12-16 19:27:18.976272] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:34.664 [2024-12-16 19:27:18.976279] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:34.664 [2024-12-16 19:27:18.976285] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:34.664 [2024-12-16 19:27:18.976291] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:34.664 [2024-12-16 19:27:18.976296] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:34.664 [2024-12-16 19:27:18.976302] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:34.664 [2024-12-16 19:27:18.976307] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:34.664 [2024-12-16 19:27:18.976313] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:34.664 [2024-12-16 19:27:18.976319] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:34.664 [2024-12-16 19:27:18.976326] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:34.664 [2024-12-16 19:27:18.976331] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:34.664 [2024-12-16 19:27:18.976338] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:34.664 [2024-12-16 19:27:18.976343] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:34.664 [2024-12-16 19:27:18.976350] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:34.664 [2024-12-16 19:27:18.976355] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:34.664 [2024-12-16 19:27:18.976362] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:34.664 [2024-12-16 19:27:18.976367] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:34.664 [2024-12-16 19:27:18.976373] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:34.664 [2024-12-16 19:27:18.976379] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:34.664 [2024-12-16 19:27:18.976384] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:34.664 [2024-12-16 19:27:18.976389] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:34.664 [2024-12-16 19:27:18.976395] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:34.664 [2024-12-16 19:27:18.976401] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:34.664 [2024-12-16 19:27:18.976407] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:34.664 [2024-12-16 19:27:18.976412] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:34.664 [2024-12-16 19:27:18.976418] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:34.664 [2024-12-16 19:27:18.976423] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:34.664 [2024-12-16 19:27:18.976429] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:34.664 [2024-12-16 19:27:18.976435] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:34.664 [2024-12-16 19:27:18.976442] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:34.664 [2024-12-16 19:27:18.976447] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:34.664 [2024-12-16 19:27:18.976453] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:34.664 [2024-12-16 19:27:18.976458] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:34.664 [2024-12-16 19:27:18.976464] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:34.664 [2024-12-16 19:27:18.976469] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:34.664 [2024-12-16 19:27:18.976477] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:34.664 [2024-12-16 19:27:18.976483] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:34.664 [2024-12-16 19:27:18.976489] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:34.664 [2024-12-16 19:27:18.976494] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:34.664 [2024-12-16 19:27:18.976500] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:34.664 [2024-12-16 19:27:18.976505] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:34.664 [2024-12-16 19:27:18.976512] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:34.664 [2024-12-16 19:27:18.976518] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:34.664 [2024-12-16 19:27:18.976526] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:34.664 [2024-12-16 19:27:18.976532] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:34.664 [2024-12-16 19:27:18.976540] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:34.664 [2024-12-16 19:27:18.976544] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:34.664 [2024-12-16 19:27:18.976551] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:34.664 [2024-12-16 19:27:18.976556] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:34.664 [2024-12-16 19:27:18.976562] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:34.664 [2024-12-16 19:27:18.976568] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:34.664 [2024-12-16 19:27:18.976577] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:34.664 [2024-12-16 19:27:18.976584] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:34.664 [2024-12-16 19:27:18.976591] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:34.664 [2024-12-16 19:27:18.976597] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:34.664 [2024-12-16 19:27:18.976603] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:34.664 [2024-12-16 19:27:18.976608] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:34.664 [2024-12-16 19:27:18.976615] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:34.664 [2024-12-16 19:27:18.976620] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:34.664 [2024-12-16 19:27:18.976628] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:34.664 [2024-12-16 19:27:18.976633] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:34.664 [2024-12-16 19:27:18.976641] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:34.664 [2024-12-16 19:27:18.976647] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:34.664 [2024-12-16 19:27:18.976653] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:34.664 [2024-12-16 19:27:18.976659] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:34.664 [2024-12-16 19:27:18.976665] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:34.664 [2024-12-16 19:27:18.976670] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:34.664 [2024-12-16 19:27:18.976677] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:34.664 [2024-12-16 19:27:18.976684] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:34.664 [2024-12-16 19:27:18.976691] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:34.664 [2024-12-16 19:27:18.976696] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:34.664 [2024-12-16 19:27:18.976703] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:34.664 [2024-12-16 19:27:18.976709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.664 [2024-12-16 19:27:18.976715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:34.664 [2024-12-16 19:27:18.976721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.511 ms 00:25:34.664 [2024-12-16 19:27:18.976728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.664 [2024-12-16 19:27:18.976766] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:25:34.664 [2024-12-16 19:27:18.976777] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:25:38.875 [2024-12-16 19:27:22.395067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.875 [2024-12-16 19:27:22.395146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:25:38.875 [2024-12-16 19:27:22.395163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3418.285 ms 00:25:38.875 [2024-12-16 19:27:22.395194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.875 [2024-12-16 19:27:22.427043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.875 [2024-12-16 19:27:22.427105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:38.875 [2024-12-16 19:27:22.427120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.600 ms 00:25:38.875 [2024-12-16 19:27:22.427131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.875 [2024-12-16 19:27:22.427305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.875 [2024-12-16 19:27:22.427321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:38.875 [2024-12-16 19:27:22.427330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:25:38.875 [2024-12-16 19:27:22.427347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.875 [2024-12-16 19:27:22.462286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.875 [2024-12-16 19:27:22.462337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:38.875 [2024-12-16 19:27:22.462349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.882 ms 00:25:38.875 [2024-12-16 19:27:22.462359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.875 [2024-12-16 19:27:22.462397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.875 [2024-12-16 19:27:22.462413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:38.875 [2024-12-16 19:27:22.462422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:38.875 [2024-12-16 19:27:22.462438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.875 [2024-12-16 19:27:22.463003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.875 [2024-12-16 19:27:22.463030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:38.875 [2024-12-16 19:27:22.463041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.513 ms 00:25:38.875 [2024-12-16 19:27:22.463051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.875 [2024-12-16 19:27:22.463166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.875 [2024-12-16 19:27:22.463211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:38.875 [2024-12-16 19:27:22.463226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:25:38.875 [2024-12-16 19:27:22.463238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.875 [2024-12-16 19:27:22.480295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.875 [2024-12-16 19:27:22.480507] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:38.875 [2024-12-16 19:27:22.480528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.037 ms 00:25:38.875 [2024-12-16 19:27:22.480539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.875 [2024-12-16 19:27:22.501640] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:38.875 [2024-12-16 19:27:22.505620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.875 [2024-12-16 19:27:22.505666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:38.875 [2024-12-16 19:27:22.505682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.985 ms 00:25:38.875 [2024-12-16 19:27:22.505691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.875 [2024-12-16 19:27:22.603827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.875 [2024-12-16 19:27:22.603892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:25:38.876 [2024-12-16 19:27:22.603912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 98.085 ms 00:25:38.876 [2024-12-16 19:27:22.603921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.876 [2024-12-16 19:27:22.604131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.876 [2024-12-16 19:27:22.604147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:38.876 [2024-12-16 19:27:22.604162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.156 ms 00:25:38.876 [2024-12-16 19:27:22.604200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.876 [2024-12-16 19:27:22.630183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.876 [2024-12-16 19:27:22.630229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:25:38.876 [2024-12-16 19:27:22.630246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.919 ms 00:25:38.876 [2024-12-16 19:27:22.630254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.876 [2024-12-16 19:27:22.655396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.876 [2024-12-16 19:27:22.655577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:25:38.876 [2024-12-16 19:27:22.655604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.085 ms 00:25:38.876 [2024-12-16 19:27:22.655612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.876 [2024-12-16 19:27:22.656255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.876 [2024-12-16 19:27:22.656277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:38.876 [2024-12-16 19:27:22.656290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.599 ms 00:25:38.876 [2024-12-16 19:27:22.656301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.876 [2024-12-16 19:27:22.740718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.876 [2024-12-16 19:27:22.740769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:25:38.876 [2024-12-16 19:27:22.740789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 84.367 ms 00:25:38.876 [2024-12-16 19:27:22.740798] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.876 [2024-12-16 19:27:22.768339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.876 [2024-12-16 19:27:22.768386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:25:38.876 [2024-12-16 19:27:22.768402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.446 ms 00:25:38.876 [2024-12-16 19:27:22.768411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.876 [2024-12-16 19:27:22.793868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.876 [2024-12-16 19:27:22.793915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:25:38.876 [2024-12-16 19:27:22.793930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.402 ms 00:25:38.876 [2024-12-16 19:27:22.793938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.876 [2024-12-16 19:27:22.819715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.876 [2024-12-16 19:27:22.819761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:38.876 [2024-12-16 19:27:22.819776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.724 ms 00:25:38.876 [2024-12-16 19:27:22.819783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.876 [2024-12-16 19:27:22.819837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.876 [2024-12-16 19:27:22.819848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:38.876 [2024-12-16 19:27:22.819863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:25:38.876 [2024-12-16 19:27:22.819870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.876 [2024-12-16 19:27:22.819969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.876 [2024-12-16 19:27:22.819982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:38.876 [2024-12-16 19:27:22.819993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:25:38.876 [2024-12-16 19:27:22.820001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.876 [2024-12-16 19:27:22.821334] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3854.476 ms, result 0 00:25:38.876 { 00:25:38.876 "name": "ftl0", 00:25:38.876 "uuid": "216c5858-1f24-4a0e-85c5-3daa64fc07f6" 00:25:38.876 } 00:25:38.876 19:27:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:25:38.876 19:27:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:25:38.876 19:27:23 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:25:38.876 19:27:23 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:25:38.876 19:27:23 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:25:39.137 /dev/nbd0 00:25:39.137 19:27:23 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:25:39.137 19:27:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:25:39.137 19:27:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # local i 00:25:39.137 19:27:23 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:25:39.137 19:27:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:25:39.137 19:27:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:25:39.137 19:27:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@877 -- # break 00:25:39.137 19:27:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:25:39.137 19:27:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:25:39.137 19:27:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:25:39.137 1+0 records in 00:25:39.137 1+0 records out 00:25:39.137 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000520191 s, 7.9 MB/s 00:25:39.137 19:27:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:25:39.137 19:27:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # size=4096 00:25:39.137 19:27:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:25:39.137 19:27:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:25:39.137 19:27:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@893 -- # return 0 00:25:39.137 19:27:23 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:25:39.137 [2024-12-16 19:27:23.388032] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:25:39.137 [2024-12-16 19:27:23.388202] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82038 ] 00:25:39.398 [2024-12-16 19:27:23.551996] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:39.398 [2024-12-16 19:27:23.672835] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:25:40.785  [2024-12-16T19:27:26.083Z] Copying: 184/1024 [MB] (184 MBps) [2024-12-16T19:27:27.025Z] Copying: 370/1024 [MB] (186 MBps) [2024-12-16T19:27:28.401Z] Copying: 561/1024 [MB] (190 MBps) [2024-12-16T19:27:28.968Z] Copying: 806/1024 [MB] (245 MBps) [2024-12-16T19:27:29.535Z] Copying: 1024/1024 [MB] (average 210 MBps) 00:25:45.181 00:25:45.181 19:27:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:25:47.082 19:27:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:25:47.082 [2024-12-16 19:27:31.170248] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
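The records above and below belong to the test's fill phase: ftl0 is exported over NBD, a file of random data (262144 blocks of 4096 bytes) is generated and checksummed, and the file is then written through /dev/nbd0 with O_DIRECT. A minimal shell sketch of that phase, using only the commands visible in this log, with paths abbreviated to be repo-relative and assuming a running spdk_tgt plus the default rpc.py socket:

    # export the FTL bdev as a kernel block device
    modprobe nbd
    scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0

    # generate the random data and record its checksum for a later comparison
    build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=test/ftl/testfile --bs=4096 --count=262144
    md5sum test/ftl/testfile

    # push the data into ftl0 through the NBD device, bypassing the page cache
    build/bin/spdk_dd -m 0x2 --if=test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct

262144 blocks of 4096 bytes is 1024 MiB, which matches the "Copying: 1024/1024 [MB]" progress lines.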
00:25:47.082 [2024-12-16 19:27:31.170362] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82125 ] 00:25:47.082 [2024-12-16 19:27:31.327702] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:47.082 [2024-12-16 19:27:31.420595] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:25:48.457  [2024-12-16T19:27:33.761Z] Copying: 14/1024 [MB] (14 MBps) [2024-12-16T19:27:34.695Z] Copying: 23272/1048576 [kB] (8936 kBps) [2024-12-16T19:27:35.637Z] Copying: 41/1024 [MB] (18 MBps) [2024-12-16T19:27:37.011Z] Copying: 52/1024 [MB] (10 MBps) [2024-12-16T19:27:37.946Z] Copying: 66/1024 [MB] (14 MBps) [2024-12-16T19:27:38.946Z] Copying: 98/1024 [MB] (32 MBps) [2024-12-16T19:27:39.881Z] Copying: 112/1024 [MB] (13 MBps) [2024-12-16T19:27:40.816Z] Copying: 128/1024 [MB] (16 MBps) [2024-12-16T19:27:41.750Z] Copying: 144/1024 [MB] (15 MBps) [2024-12-16T19:27:42.684Z] Copying: 158/1024 [MB] (14 MBps) [2024-12-16T19:27:43.618Z] Copying: 173/1024 [MB] (15 MBps) [2024-12-16T19:27:44.992Z] Copying: 203/1024 [MB] (29 MBps) [2024-12-16T19:27:45.925Z] Copying: 220/1024 [MB] (17 MBps) [2024-12-16T19:27:46.859Z] Copying: 236/1024 [MB] (15 MBps) [2024-12-16T19:27:47.794Z] Copying: 261/1024 [MB] (25 MBps) [2024-12-16T19:27:48.729Z] Copying: 278/1024 [MB] (16 MBps) [2024-12-16T19:27:49.671Z] Copying: 294/1024 [MB] (16 MBps) [2024-12-16T19:27:51.043Z] Copying: 316/1024 [MB] (22 MBps) [2024-12-16T19:27:51.977Z] Copying: 332/1024 [MB] (15 MBps) [2024-12-16T19:27:52.910Z] Copying: 361/1024 [MB] (28 MBps) [2024-12-16T19:27:53.844Z] Copying: 381/1024 [MB] (20 MBps) [2024-12-16T19:27:54.778Z] Copying: 414/1024 [MB] (32 MBps) [2024-12-16T19:27:55.710Z] Copying: 436/1024 [MB] (22 MBps) [2024-12-16T19:27:56.644Z] Copying: 471/1024 [MB] (35 MBps) [2024-12-16T19:27:58.018Z] Copying: 506/1024 [MB] (35 MBps) [2024-12-16T19:27:58.953Z] Copying: 532/1024 [MB] (25 MBps) [2024-12-16T19:27:59.887Z] Copying: 550/1024 [MB] (17 MBps) [2024-12-16T19:28:00.821Z] Copying: 568/1024 [MB] (18 MBps) [2024-12-16T19:28:01.755Z] Copying: 598/1024 [MB] (30 MBps) [2024-12-16T19:28:02.689Z] Copying: 633/1024 [MB] (35 MBps) [2024-12-16T19:28:03.624Z] Copying: 668/1024 [MB] (35 MBps) [2024-12-16T19:28:05.070Z] Copying: 703/1024 [MB] (34 MBps) [2024-12-16T19:28:05.635Z] Copying: 735/1024 [MB] (31 MBps) [2024-12-16T19:28:07.008Z] Copying: 746/1024 [MB] (11 MBps) [2024-12-16T19:28:07.942Z] Copying: 775112/1048576 [kB] (10192 kBps) [2024-12-16T19:28:08.876Z] Copying: 770/1024 [MB] (13 MBps) [2024-12-16T19:28:09.811Z] Copying: 791/1024 [MB] (21 MBps) [2024-12-16T19:28:10.745Z] Copying: 826/1024 [MB] (34 MBps) [2024-12-16T19:28:11.679Z] Copying: 860/1024 [MB] (34 MBps) [2024-12-16T19:28:13.052Z] Copying: 894/1024 [MB] (33 MBps) [2024-12-16T19:28:13.618Z] Copying: 929/1024 [MB] (35 MBps) [2024-12-16T19:28:14.992Z] Copying: 963/1024 [MB] (33 MBps) [2024-12-16T19:28:15.557Z] Copying: 996/1024 [MB] (32 MBps) [2024-12-16T19:28:16.124Z] Copying: 1024/1024 [MB] (average 23 MBps) 00:26:31.770 00:26:31.770 19:28:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:26:31.770 19:28:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:26:32.029 19:28:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:26:32.029 [2024-12-16 19:28:16.347397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:32.029 [2024-12-16 19:28:16.347434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:32.029 [2024-12-16 19:28:16.347445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:26:32.029 [2024-12-16 19:28:16.347453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.029 [2024-12-16 19:28:16.347472] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:32.029 [2024-12-16 19:28:16.349489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:32.029 [2024-12-16 19:28:16.349513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:32.029 [2024-12-16 19:28:16.349523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.003 ms 00:26:32.029 [2024-12-16 19:28:16.349529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.029 [2024-12-16 19:28:16.351452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:32.029 [2024-12-16 19:28:16.351558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:32.029 [2024-12-16 19:28:16.351574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.901 ms 00:26:32.029 [2024-12-16 19:28:16.351581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.029 [2024-12-16 19:28:16.365148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:32.029 [2024-12-16 19:28:16.365185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:32.029 [2024-12-16 19:28:16.365195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.549 ms 00:26:32.029 [2024-12-16 19:28:16.365202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.029 [2024-12-16 19:28:16.369989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:32.029 [2024-12-16 19:28:16.370011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:32.029 [2024-12-16 19:28:16.370021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.760 ms 00:26:32.029 [2024-12-16 19:28:16.370028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.290 [2024-12-16 19:28:16.388481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:32.290 [2024-12-16 19:28:16.388506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:32.290 [2024-12-16 19:28:16.388516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.398 ms 00:26:32.290 [2024-12-16 19:28:16.388522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.290 [2024-12-16 19:28:16.400792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:32.290 [2024-12-16 19:28:16.400818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:32.290 [2024-12-16 19:28:16.400831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.240 ms 00:26:32.290 [2024-12-16 19:28:16.400838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.290 [2024-12-16 19:28:16.400941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:32.290 [2024-12-16 19:28:16.400949] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:32.290 [2024-12-16 19:28:16.400957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:26:32.290 [2024-12-16 19:28:16.400963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.290 [2024-12-16 19:28:16.418433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:32.290 [2024-12-16 19:28:16.418457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:26:32.290 [2024-12-16 19:28:16.418466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.456 ms 00:26:32.290 [2024-12-16 19:28:16.418472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.290 [2024-12-16 19:28:16.435832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:32.290 [2024-12-16 19:28:16.435855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:26:32.290 [2024-12-16 19:28:16.435864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.332 ms 00:26:32.290 [2024-12-16 19:28:16.435870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.290 [2024-12-16 19:28:16.452900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:32.290 [2024-12-16 19:28:16.452924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:32.290 [2024-12-16 19:28:16.452937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.999 ms 00:26:32.290 [2024-12-16 19:28:16.452943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.290 [2024-12-16 19:28:16.469788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:32.290 [2024-12-16 19:28:16.469811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:32.290 [2024-12-16 19:28:16.469821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.790 ms 00:26:32.290 [2024-12-16 19:28:16.469826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.290 [2024-12-16 19:28:16.469853] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:32.290 [2024-12-16 19:28:16.469865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:26:32.290 [2024-12-16 19:28:16.469873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:26:32.290 [2024-12-16 19:28:16.469879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:32.290 [2024-12-16 19:28:16.469886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:32.290 [2024-12-16 19:28:16.469892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:32.290 [2024-12-16 19:28:16.469900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:32.290 [2024-12-16 19:28:16.469906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:32.290 [2024-12-16 19:28:16.469915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:32.290 [2024-12-16 19:28:16.469921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:32.290 [2024-12-16 19:28:16.469928] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:32.290 [2024-12-16 19:28:16.469933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:32.290 [2024-12-16 19:28:16.469941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:32.290 [2024-12-16 19:28:16.469946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:32.290 [2024-12-16 19:28:16.469954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:32.290 [2024-12-16 19:28:16.469959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:32.290 [2024-12-16 19:28:16.469966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:32.290 [2024-12-16 19:28:16.469972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:32.290 [2024-12-16 19:28:16.469979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:32.290 [2024-12-16 19:28:16.469984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:32.290 [2024-12-16 19:28:16.469991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:32.290 [2024-12-16 19:28:16.469997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:32.290 [2024-12-16 19:28:16.470006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:32.290 [2024-12-16 19:28:16.470012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:32.290 [2024-12-16 19:28:16.470020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:32.290 [2024-12-16 19:28:16.470026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:32.290 [2024-12-16 19:28:16.470033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:32.290 [2024-12-16 19:28:16.470039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:32.290 [2024-12-16 19:28:16.470046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:32.290 [2024-12-16 19:28:16.470052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:32.290 [2024-12-16 19:28:16.470059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:32.290 [2024-12-16 19:28:16.470065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:32.290 [2024-12-16 19:28:16.470073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:32.290 [2024-12-16 19:28:16.470079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:32.290 [2024-12-16 19:28:16.470086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:32.290 [2024-12-16 
19:28:16.470091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:32.290 [2024-12-16 19:28:16.470098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:32.290 [2024-12-16 19:28:16.470104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:32.290 [2024-12-16 19:28:16.470111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:32.290 [2024-12-16 19:28:16.470117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:32.290 [2024-12-16 19:28:16.470125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:32.290 [2024-12-16 19:28:16.470130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:32.290 [2024-12-16 19:28:16.470137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:32.290 [2024-12-16 19:28:16.470143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:32.290 [2024-12-16 19:28:16.470150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:32.290 [2024-12-16 19:28:16.470155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:32.290 [2024-12-16 19:28:16.470162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:32.291 [2024-12-16 19:28:16.470184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:32.291 [2024-12-16 19:28:16.470191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:32.291 [2024-12-16 19:28:16.470197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:32.291 [2024-12-16 19:28:16.470204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:32.291 [2024-12-16 19:28:16.470209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:32.291 [2024-12-16 19:28:16.470216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:32.291 [2024-12-16 19:28:16.470222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:32.291 [2024-12-16 19:28:16.470228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:32.291 [2024-12-16 19:28:16.470234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:32.291 [2024-12-16 19:28:16.470243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:32.291 [2024-12-16 19:28:16.470249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:32.291 [2024-12-16 19:28:16.470256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:32.291 [2024-12-16 19:28:16.470261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 
00:26:32.291 [2024-12-16 19:28:16.470268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:32.291 [2024-12-16 19:28:16.470274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:32.291 [2024-12-16 19:28:16.470280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:32.291 [2024-12-16 19:28:16.470286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:32.291 [2024-12-16 19:28:16.470294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:32.291 [2024-12-16 19:28:16.470299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:32.291 [2024-12-16 19:28:16.470306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:32.291 [2024-12-16 19:28:16.470312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:32.291 [2024-12-16 19:28:16.470319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:32.291 [2024-12-16 19:28:16.470324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:32.291 [2024-12-16 19:28:16.470331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:32.291 [2024-12-16 19:28:16.470337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:32.291 [2024-12-16 19:28:16.470347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:32.291 [2024-12-16 19:28:16.470353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:32.291 [2024-12-16 19:28:16.470360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:32.291 [2024-12-16 19:28:16.470365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:32.291 [2024-12-16 19:28:16.470372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:32.291 [2024-12-16 19:28:16.470378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:32.291 [2024-12-16 19:28:16.470386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:32.291 [2024-12-16 19:28:16.470391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:32.291 [2024-12-16 19:28:16.470399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:32.291 [2024-12-16 19:28:16.470404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:32.291 [2024-12-16 19:28:16.470411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:32.291 [2024-12-16 19:28:16.470417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:32.291 [2024-12-16 19:28:16.470425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 
wr_cnt: 0 state: free 00:26:32.291 [2024-12-16 19:28:16.470431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:32.291 [2024-12-16 19:28:16.470437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:32.291 [2024-12-16 19:28:16.470443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:32.291 [2024-12-16 19:28:16.470452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:32.291 [2024-12-16 19:28:16.470458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:32.291 [2024-12-16 19:28:16.470464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:32.291 [2024-12-16 19:28:16.470470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:32.291 [2024-12-16 19:28:16.470485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:32.291 [2024-12-16 19:28:16.470491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:32.291 [2024-12-16 19:28:16.470498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:32.291 [2024-12-16 19:28:16.470505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:32.291 [2024-12-16 19:28:16.470518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:32.291 [2024-12-16 19:28:16.470524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:32.291 [2024-12-16 19:28:16.470531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:32.291 [2024-12-16 19:28:16.470537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:32.291 [2024-12-16 19:28:16.470552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:32.291 [2024-12-16 19:28:16.470564] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:32.291 [2024-12-16 19:28:16.470571] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 216c5858-1f24-4a0e-85c5-3daa64fc07f6 00:26:32.291 [2024-12-16 19:28:16.470577] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:26:32.291 [2024-12-16 19:28:16.470585] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:26:32.291 [2024-12-16 19:28:16.470590] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:26:32.291 [2024-12-16 19:28:16.470599] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:26:32.291 [2024-12-16 19:28:16.470604] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:32.291 [2024-12-16 19:28:16.470611] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:32.291 [2024-12-16 19:28:16.470617] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:32.291 [2024-12-16 19:28:16.470623] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:32.291 [2024-12-16 19:28:16.470627] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl0] start: 0 00:26:32.291 [2024-12-16 19:28:16.470634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:32.291 [2024-12-16 19:28:16.470639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:32.291 [2024-12-16 19:28:16.470647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.781 ms 00:26:32.291 [2024-12-16 19:28:16.470652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.291 [2024-12-16 19:28:16.480048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:32.291 [2024-12-16 19:28:16.480148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:32.291 [2024-12-16 19:28:16.480162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.371 ms 00:26:32.291 [2024-12-16 19:28:16.480168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.291 [2024-12-16 19:28:16.480447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:32.291 [2024-12-16 19:28:16.480455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:32.291 [2024-12-16 19:28:16.480463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.249 ms 00:26:32.291 [2024-12-16 19:28:16.480468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.291 [2024-12-16 19:28:16.513165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:32.291 [2024-12-16 19:28:16.513200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:32.291 [2024-12-16 19:28:16.513210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:32.291 [2024-12-16 19:28:16.513216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.291 [2024-12-16 19:28:16.513257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:32.291 [2024-12-16 19:28:16.513264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:32.291 [2024-12-16 19:28:16.513271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:32.291 [2024-12-16 19:28:16.513277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.291 [2024-12-16 19:28:16.513328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:32.291 [2024-12-16 19:28:16.513338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:32.291 [2024-12-16 19:28:16.513345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:32.291 [2024-12-16 19:28:16.513351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.291 [2024-12-16 19:28:16.513366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:32.291 [2024-12-16 19:28:16.513372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:32.291 [2024-12-16 19:28:16.513379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:32.291 [2024-12-16 19:28:16.513385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.291 [2024-12-16 19:28:16.572964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:32.291 [2024-12-16 19:28:16.572995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:32.291 [2024-12-16 19:28:16.573005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:32.291 
[2024-12-16 19:28:16.573011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.291 [2024-12-16 19:28:16.620907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:32.291 [2024-12-16 19:28:16.621008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:32.291 [2024-12-16 19:28:16.621023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:32.291 [2024-12-16 19:28:16.621030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.291 [2024-12-16 19:28:16.621116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:32.292 [2024-12-16 19:28:16.621124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:32.292 [2024-12-16 19:28:16.621133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:32.292 [2024-12-16 19:28:16.621139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.292 [2024-12-16 19:28:16.621193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:32.292 [2024-12-16 19:28:16.621201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:32.292 [2024-12-16 19:28:16.621209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:32.292 [2024-12-16 19:28:16.621214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.292 [2024-12-16 19:28:16.621284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:32.292 [2024-12-16 19:28:16.621292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:32.292 [2024-12-16 19:28:16.621299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:32.292 [2024-12-16 19:28:16.621306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.292 [2024-12-16 19:28:16.621331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:32.292 [2024-12-16 19:28:16.621338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:32.292 [2024-12-16 19:28:16.621346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:32.292 [2024-12-16 19:28:16.621352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.292 [2024-12-16 19:28:16.621381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:32.292 [2024-12-16 19:28:16.621387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:32.292 [2024-12-16 19:28:16.621395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:32.292 [2024-12-16 19:28:16.621402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.292 [2024-12-16 19:28:16.621437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:32.292 [2024-12-16 19:28:16.621445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:32.292 [2024-12-16 19:28:16.621452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:32.292 [2024-12-16 19:28:16.621458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.292 [2024-12-16 19:28:16.621560] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 274.133 ms, result 0 00:26:32.292 true 00:26:32.552 19:28:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 81896 
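Here dirty_shutdown.sh line 83 kills the spdk_tgt process (pid 81896 in this run) with SIGKILL now that ftl0 has been unloaded from it; the records that follow regenerate a second test file and rewrite the volume from a standalone spdk_dd app. A minimal sketch of that sequence, with the pid held in an illustrative shell variable and the JSON config assumed to be the one dumped above via save_subsystem_config:

    # hard-kill the target that owned ftl0 and drop its shm trace file
    kill -9 "$tgt_pid"
    rm -f "/dev/shm/spdk_tgt_trace.pid$tgt_pid"

    # reopen ftl0 inside spdk_dd itself from the saved bdev config, so no
    # target process is needed (--ob selects an output bdev, not a file)
    build/bin/spdk_dd --if=/dev/urandom --of=test/ftl/testfile2 --bs=4096 --count=262144
    build/bin/spdk_dd --if=test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 \
        --json=test/ftl/config/ftl.json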
00:26:32.552 19:28:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid81896 00:26:32.552 19:28:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:26:32.552 [2024-12-16 19:28:16.709511] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:26:32.552 [2024-12-16 19:28:16.709629] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82601 ] 00:26:32.552 [2024-12-16 19:28:16.864831] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:32.812 [2024-12-16 19:28:16.948193] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:26:34.193  [2024-12-16T19:28:19.488Z] Copying: 260/1024 [MB] (260 MBps) [2024-12-16T19:28:20.429Z] Copying: 524/1024 [MB] (263 MBps) [2024-12-16T19:28:21.371Z] Copying: 782/1024 [MB] (258 MBps) [2024-12-16T19:28:21.632Z] Copying: 1024/1024 [MB] (average 260 MBps) 00:26:37.278 00:26:37.278 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 81896 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:26:37.278 19:28:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:37.539 [2024-12-16 19:28:21.688478] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:26:37.539 [2024-12-16 19:28:21.688818] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82655 ] 00:26:37.539 [2024-12-16 19:28:21.848118] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:37.800 [2024-12-16 19:28:21.922863] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:26:37.800 [2024-12-16 19:28:22.132715] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:37.800 [2024-12-16 19:28:22.132768] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:38.061 [2024-12-16 19:28:22.196409] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:26:38.061 [2024-12-16 19:28:22.196907] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:26:38.061 [2024-12-16 19:28:22.197835] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:26:38.635 [2024-12-16 19:28:22.743557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.635 [2024-12-16 19:28:22.743784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:38.635 [2024-12-16 19:28:22.743808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:38.635 [2024-12-16 19:28:22.743824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.635 [2024-12-16 19:28:22.743894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.635 [2024-12-16 19:28:22.743905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:38.635 [2024-12-16 19:28:22.743915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:26:38.635 [2024-12-16 19:28:22.743922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.635 [2024-12-16 19:28:22.743944] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:38.635 [2024-12-16 19:28:22.744695] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:38.635 [2024-12-16 19:28:22.744717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.635 [2024-12-16 19:28:22.744725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:38.635 [2024-12-16 19:28:22.744734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.778 ms 00:26:38.635 [2024-12-16 19:28:22.744742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.635 [2024-12-16 19:28:22.746454] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:26:38.635 [2024-12-16 19:28:22.761268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.635 [2024-12-16 19:28:22.761325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:26:38.635 [2024-12-16 19:28:22.761340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.815 ms 00:26:38.635 [2024-12-16 19:28:22.761348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.635 [2024-12-16 19:28:22.761436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.635 [2024-12-16 19:28:22.761448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super 
block 00:26:38.635 [2024-12-16 19:28:22.761458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:26:38.635 [2024-12-16 19:28:22.761465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.635 [2024-12-16 19:28:22.770001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.635 [2024-12-16 19:28:22.770050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:38.635 [2024-12-16 19:28:22.770061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.444 ms 00:26:38.635 [2024-12-16 19:28:22.770069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.635 [2024-12-16 19:28:22.770158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.635 [2024-12-16 19:28:22.770168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:38.635 [2024-12-16 19:28:22.770199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:26:38.635 [2024-12-16 19:28:22.770208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.635 [2024-12-16 19:28:22.770260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.635 [2024-12-16 19:28:22.770271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:38.635 [2024-12-16 19:28:22.770279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:26:38.635 [2024-12-16 19:28:22.770286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.635 [2024-12-16 19:28:22.770310] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:38.635 [2024-12-16 19:28:22.774313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.635 [2024-12-16 19:28:22.774355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:38.635 [2024-12-16 19:28:22.774366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.009 ms 00:26:38.635 [2024-12-16 19:28:22.774374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.635 [2024-12-16 19:28:22.774414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.635 [2024-12-16 19:28:22.774423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:38.635 [2024-12-16 19:28:22.774432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:26:38.635 [2024-12-16 19:28:22.774440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.635 [2024-12-16 19:28:22.774500] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:26:38.635 [2024-12-16 19:28:22.774541] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:26:38.635 [2024-12-16 19:28:22.774579] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:26:38.635 [2024-12-16 19:28:22.774596] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:26:38.635 [2024-12-16 19:28:22.774704] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:38.635 [2024-12-16 19:28:22.774716] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:38.635 
[2024-12-16 19:28:22.774728] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:26:38.635 [2024-12-16 19:28:22.774741] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:38.635 [2024-12-16 19:28:22.774750] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:38.635 [2024-12-16 19:28:22.774758] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:38.635 [2024-12-16 19:28:22.774766] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:38.635 [2024-12-16 19:28:22.774774] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:38.635 [2024-12-16 19:28:22.774782] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:38.635 [2024-12-16 19:28:22.774790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.635 [2024-12-16 19:28:22.774797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:38.635 [2024-12-16 19:28:22.774805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.293 ms 00:26:38.635 [2024-12-16 19:28:22.774813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.635 [2024-12-16 19:28:22.774896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.635 [2024-12-16 19:28:22.774908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:38.635 [2024-12-16 19:28:22.774915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:26:38.635 [2024-12-16 19:28:22.774924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.635 [2024-12-16 19:28:22.775024] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:38.635 [2024-12-16 19:28:22.775035] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:38.635 [2024-12-16 19:28:22.775045] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:38.635 [2024-12-16 19:28:22.775052] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:38.635 [2024-12-16 19:28:22.775060] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:38.635 [2024-12-16 19:28:22.775067] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:38.635 [2024-12-16 19:28:22.775073] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:38.635 [2024-12-16 19:28:22.775082] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:38.635 [2024-12-16 19:28:22.775090] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:38.635 [2024-12-16 19:28:22.775104] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:38.635 [2024-12-16 19:28:22.775111] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:38.635 [2024-12-16 19:28:22.775117] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:38.635 [2024-12-16 19:28:22.775124] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:38.635 [2024-12-16 19:28:22.775134] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:38.635 [2024-12-16 19:28:22.775141] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:38.636 [2024-12-16 19:28:22.775147] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:38.636 [2024-12-16 19:28:22.775154] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:38.636 [2024-12-16 19:28:22.775161] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:38.636 [2024-12-16 19:28:22.775167] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:38.636 [2024-12-16 19:28:22.775199] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:38.636 [2024-12-16 19:28:22.775206] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:38.636 [2024-12-16 19:28:22.775213] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:38.636 [2024-12-16 19:28:22.775220] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:38.636 [2024-12-16 19:28:22.775227] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:38.636 [2024-12-16 19:28:22.775234] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:38.636 [2024-12-16 19:28:22.775241] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:38.636 [2024-12-16 19:28:22.775248] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:38.636 [2024-12-16 19:28:22.775256] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:38.636 [2024-12-16 19:28:22.775263] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:38.636 [2024-12-16 19:28:22.775269] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:38.636 [2024-12-16 19:28:22.775275] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:38.636 [2024-12-16 19:28:22.775283] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:38.636 [2024-12-16 19:28:22.775291] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:38.636 [2024-12-16 19:28:22.775298] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:38.636 [2024-12-16 19:28:22.775305] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:38.636 [2024-12-16 19:28:22.775315] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:38.636 [2024-12-16 19:28:22.775322] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:38.636 [2024-12-16 19:28:22.775329] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:38.636 [2024-12-16 19:28:22.775336] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:26:38.636 [2024-12-16 19:28:22.775343] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:38.636 [2024-12-16 19:28:22.775350] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:38.636 [2024-12-16 19:28:22.775358] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:38.636 [2024-12-16 19:28:22.775364] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:38.636 [2024-12-16 19:28:22.775371] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:38.636 [2024-12-16 19:28:22.775379] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:38.636 [2024-12-16 19:28:22.775390] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:38.636 [2024-12-16 19:28:22.775398] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:38.636 [2024-12-16 
19:28:22.775407] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:38.636 [2024-12-16 19:28:22.775414] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:38.636 [2024-12-16 19:28:22.775421] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:38.636 [2024-12-16 19:28:22.775428] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:38.636 [2024-12-16 19:28:22.775435] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:38.636 [2024-12-16 19:28:22.775442] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:38.636 [2024-12-16 19:28:22.775450] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:38.636 [2024-12-16 19:28:22.775459] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:38.636 [2024-12-16 19:28:22.775469] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:38.636 [2024-12-16 19:28:22.775476] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:38.636 [2024-12-16 19:28:22.775483] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:38.636 [2024-12-16 19:28:22.775490] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:38.636 [2024-12-16 19:28:22.775497] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:38.636 [2024-12-16 19:28:22.775503] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:38.636 [2024-12-16 19:28:22.775510] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:26:38.636 [2024-12-16 19:28:22.775517] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:38.636 [2024-12-16 19:28:22.775524] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:38.636 [2024-12-16 19:28:22.775531] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:38.636 [2024-12-16 19:28:22.775538] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:38.636 [2024-12-16 19:28:22.775545] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:38.636 [2024-12-16 19:28:22.775552] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:38.636 [2024-12-16 19:28:22.775559] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:38.636 [2024-12-16 19:28:22.775566] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - 
base dev: 00:26:38.636 [2024-12-16 19:28:22.775575] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:38.636 [2024-12-16 19:28:22.775583] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:38.636 [2024-12-16 19:28:22.775590] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:38.636 [2024-12-16 19:28:22.775597] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:38.636 [2024-12-16 19:28:22.775604] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:38.636 [2024-12-16 19:28:22.775612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.636 [2024-12-16 19:28:22.775620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:38.636 [2024-12-16 19:28:22.775630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.657 ms 00:26:38.636 [2024-12-16 19:28:22.775640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.636 [2024-12-16 19:28:22.808027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.636 [2024-12-16 19:28:22.808080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:38.636 [2024-12-16 19:28:22.808092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.340 ms 00:26:38.636 [2024-12-16 19:28:22.808102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.636 [2024-12-16 19:28:22.808208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.636 [2024-12-16 19:28:22.808219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:38.636 [2024-12-16 19:28:22.808227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:26:38.636 [2024-12-16 19:28:22.808235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.636 [2024-12-16 19:28:22.859391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.636 [2024-12-16 19:28:22.859462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:38.636 [2024-12-16 19:28:22.859479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 51.092 ms 00:26:38.636 [2024-12-16 19:28:22.859488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.636 [2024-12-16 19:28:22.859541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.636 [2024-12-16 19:28:22.859551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:38.636 [2024-12-16 19:28:22.859561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:38.636 [2024-12-16 19:28:22.859569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.636 [2024-12-16 19:28:22.860204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.636 [2024-12-16 19:28:22.860236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:38.636 [2024-12-16 19:28:22.860248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.553 ms 00:26:38.636 [2024-12-16 19:28:22.860262] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.636 [2024-12-16 19:28:22.860410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.636 [2024-12-16 19:28:22.860421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:38.636 [2024-12-16 19:28:22.860429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.117 ms 00:26:38.636 [2024-12-16 19:28:22.860437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.636 [2024-12-16 19:28:22.876341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.636 [2024-12-16 19:28:22.876387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:38.636 [2024-12-16 19:28:22.876399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.883 ms 00:26:38.636 [2024-12-16 19:28:22.876407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.636 [2024-12-16 19:28:22.891002] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:26:38.636 [2024-12-16 19:28:22.891052] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:26:38.636 [2024-12-16 19:28:22.891066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.636 [2024-12-16 19:28:22.891075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:26:38.636 [2024-12-16 19:28:22.891085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.544 ms 00:26:38.636 [2024-12-16 19:28:22.891093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.637 [2024-12-16 19:28:22.916720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.637 [2024-12-16 19:28:22.916773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:26:38.637 [2024-12-16 19:28:22.916784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.572 ms 00:26:38.637 [2024-12-16 19:28:22.916792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.637 [2024-12-16 19:28:22.929845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.637 [2024-12-16 19:28:22.929889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:26:38.637 [2024-12-16 19:28:22.929900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.990 ms 00:26:38.637 [2024-12-16 19:28:22.929908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.637 [2024-12-16 19:28:22.942538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.637 [2024-12-16 19:28:22.942582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:26:38.637 [2024-12-16 19:28:22.942594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.579 ms 00:26:38.637 [2024-12-16 19:28:22.942601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.637 [2024-12-16 19:28:22.943279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.637 [2024-12-16 19:28:22.943305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:38.637 [2024-12-16 19:28:22.943316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.565 ms 00:26:38.637 [2024-12-16 19:28:22.943324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:26:38.898 [2024-12-16 19:28:23.007716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.898 [2024-12-16 19:28:23.007785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:26:38.898 [2024-12-16 19:28:23.007803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 64.372 ms 00:26:38.898 [2024-12-16 19:28:23.007812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.898 [2024-12-16 19:28:23.019366] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:38.898 [2024-12-16 19:28:23.022342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.898 [2024-12-16 19:28:23.022384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:38.898 [2024-12-16 19:28:23.022396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.472 ms 00:26:38.898 [2024-12-16 19:28:23.022412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.898 [2024-12-16 19:28:23.022501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.898 [2024-12-16 19:28:23.022537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:26:38.898 [2024-12-16 19:28:23.022548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:26:38.898 [2024-12-16 19:28:23.022558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.898 [2024-12-16 19:28:23.022632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.898 [2024-12-16 19:28:23.022643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:38.898 [2024-12-16 19:28:23.022652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:26:38.898 [2024-12-16 19:28:23.022661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.898 [2024-12-16 19:28:23.022689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.898 [2024-12-16 19:28:23.022698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:38.898 [2024-12-16 19:28:23.022707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:38.898 [2024-12-16 19:28:23.022716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.898 [2024-12-16 19:28:23.022749] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:26:38.898 [2024-12-16 19:28:23.022760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.898 [2024-12-16 19:28:23.022768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:26:38.898 [2024-12-16 19:28:23.022777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:26:38.898 [2024-12-16 19:28:23.022789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.898 [2024-12-16 19:28:23.048525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.898 [2024-12-16 19:28:23.048575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:38.898 [2024-12-16 19:28:23.048589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.713 ms 00:26:38.898 [2024-12-16 19:28:23.048598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.898 [2024-12-16 19:28:23.048684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.898 [2024-12-16 
19:28:23.048695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:38.898 [2024-12-16 19:28:23.048704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:26:38.898 [2024-12-16 19:28:23.048712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.898 [2024-12-16 19:28:23.049968] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 305.928 ms, result 0 00:26:39.841  [2024-12-16T19:28:25.137Z] Copying: 14/1024 [MB] (14 MBps) [2024-12-16T19:28:26.077Z] Copying: 34/1024 [MB] (19 MBps) [2024-12-16T19:28:27.464Z] Copying: 56/1024 [MB] (22 MBps) [2024-12-16T19:28:28.407Z] Copying: 71/1024 [MB] (15 MBps) [2024-12-16T19:28:29.350Z] Copying: 86/1024 [MB] (14 MBps) [2024-12-16T19:28:30.374Z] Copying: 105/1024 [MB] (19 MBps) [2024-12-16T19:28:31.317Z] Copying: 123/1024 [MB] (18 MBps) [2024-12-16T19:28:32.261Z] Copying: 141/1024 [MB] (18 MBps) [2024-12-16T19:28:33.204Z] Copying: 153/1024 [MB] (11 MBps) [2024-12-16T19:28:34.148Z] Copying: 163/1024 [MB] (10 MBps) [2024-12-16T19:28:35.091Z] Copying: 180/1024 [MB] (16 MBps) [2024-12-16T19:28:36.476Z] Copying: 227/1024 [MB] (46 MBps) [2024-12-16T19:28:37.419Z] Copying: 243/1024 [MB] (16 MBps) [2024-12-16T19:28:38.362Z] Copying: 253/1024 [MB] (10 MBps) [2024-12-16T19:28:39.305Z] Copying: 298/1024 [MB] (44 MBps) [2024-12-16T19:28:40.249Z] Copying: 323/1024 [MB] (24 MBps) [2024-12-16T19:28:41.190Z] Copying: 342/1024 [MB] (18 MBps) [2024-12-16T19:28:42.134Z] Copying: 367/1024 [MB] (25 MBps) [2024-12-16T19:28:43.077Z] Copying: 388/1024 [MB] (21 MBps) [2024-12-16T19:28:44.464Z] Copying: 415/1024 [MB] (26 MBps) [2024-12-16T19:28:45.405Z] Copying: 431/1024 [MB] (15 MBps) [2024-12-16T19:28:46.347Z] Copying: 450/1024 [MB] (18 MBps) [2024-12-16T19:28:47.290Z] Copying: 469/1024 [MB] (19 MBps) [2024-12-16T19:28:48.233Z] Copying: 491/1024 [MB] (21 MBps) [2024-12-16T19:28:49.176Z] Copying: 510/1024 [MB] (18 MBps) [2024-12-16T19:28:50.119Z] Copying: 532/1024 [MB] (22 MBps) [2024-12-16T19:28:51.063Z] Copying: 552/1024 [MB] (19 MBps) [2024-12-16T19:28:52.492Z] Copying: 570/1024 [MB] (17 MBps) [2024-12-16T19:28:53.065Z] Copying: 582/1024 [MB] (12 MBps) [2024-12-16T19:28:54.453Z] Copying: 598/1024 [MB] (16 MBps) [2024-12-16T19:28:55.397Z] Copying: 610/1024 [MB] (12 MBps) [2024-12-16T19:28:56.341Z] Copying: 620/1024 [MB] (10 MBps) [2024-12-16T19:28:57.284Z] Copying: 630/1024 [MB] (10 MBps) [2024-12-16T19:28:58.228Z] Copying: 640/1024 [MB] (10 MBps) [2024-12-16T19:28:59.171Z] Copying: 651/1024 [MB] (10 MBps) [2024-12-16T19:29:00.114Z] Copying: 677/1024 [MB] (26 MBps) [2024-12-16T19:29:01.501Z] Copying: 692/1024 [MB] (15 MBps) [2024-12-16T19:29:02.074Z] Copying: 714/1024 [MB] (22 MBps) [2024-12-16T19:29:03.464Z] Copying: 731/1024 [MB] (16 MBps) [2024-12-16T19:29:04.408Z] Copying: 759096/1048576 [kB] (10036 kBps) [2024-12-16T19:29:05.351Z] Copying: 758/1024 [MB] (17 MBps) [2024-12-16T19:29:06.293Z] Copying: 785/1024 [MB] (26 MBps) [2024-12-16T19:29:07.237Z] Copying: 806/1024 [MB] (20 MBps) [2024-12-16T19:29:08.181Z] Copying: 824/1024 [MB] (18 MBps) [2024-12-16T19:29:09.124Z] Copying: 856/1024 [MB] (31 MBps) [2024-12-16T19:29:10.068Z] Copying: 874/1024 [MB] (17 MBps) [2024-12-16T19:29:11.455Z] Copying: 891/1024 [MB] (16 MBps) [2024-12-16T19:29:12.398Z] Copying: 908/1024 [MB] (17 MBps) [2024-12-16T19:29:13.340Z] Copying: 925/1024 [MB] (17 MBps) [2024-12-16T19:29:14.285Z] Copying: 963/1024 [MB] (37 MBps) [2024-12-16T19:29:15.262Z] 
Copying: 978/1024 [MB] (15 MBps) [2024-12-16T19:29:16.266Z] Copying: 998/1024 [MB] (19 MBps) [2024-12-16T19:29:17.208Z] Copying: 1014/1024 [MB] (15 MBps) [2024-12-16T19:29:17.470Z] Copying: 1048416/1048576 [kB] (9736 kBps) [2024-12-16T19:29:17.470Z] Copying: 1024/1024 [MB] (average 18 MBps)[2024-12-16 19:29:17.242575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.116 [2024-12-16 19:29:17.242658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:33.116 [2024-12-16 19:29:17.242677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:33.116 [2024-12-16 19:29:17.242687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.116 [2024-12-16 19:29:17.244528] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:33.116 [2024-12-16 19:29:17.249839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.116 [2024-12-16 19:29:17.249885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:33.116 [2024-12-16 19:29:17.249898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.267 ms 00:27:33.116 [2024-12-16 19:29:17.249915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.116 [2024-12-16 19:29:17.262104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.116 [2024-12-16 19:29:17.262151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:33.116 [2024-12-16 19:29:17.262163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.459 ms 00:27:33.116 [2024-12-16 19:29:17.262183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.116 [2024-12-16 19:29:17.287936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.116 [2024-12-16 19:29:17.287985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:33.116 [2024-12-16 19:29:17.287998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.733 ms 00:27:33.116 [2024-12-16 19:29:17.288007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.116 [2024-12-16 19:29:17.294226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.116 [2024-12-16 19:29:17.294268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:33.116 [2024-12-16 19:29:17.294280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.169 ms 00:27:33.116 [2024-12-16 19:29:17.294289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.116 [2024-12-16 19:29:17.320760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.116 [2024-12-16 19:29:17.320806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:33.116 [2024-12-16 19:29:17.320820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.432 ms 00:27:33.116 [2024-12-16 19:29:17.320828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.116 [2024-12-16 19:29:17.336994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.116 [2024-12-16 19:29:17.337040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:33.116 [2024-12-16 19:29:17.337052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.119 ms 00:27:33.116 [2024-12-16 19:29:17.337061] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:27:33.378 [2024-12-16 19:29:17.505194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.378 [2024-12-16 19:29:17.505257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:33.378 [2024-12-16 19:29:17.505278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 168.079 ms 00:27:33.378 [2024-12-16 19:29:17.505287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.378 [2024-12-16 19:29:17.531150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.378 [2024-12-16 19:29:17.531212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:27:33.378 [2024-12-16 19:29:17.531224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.847 ms 00:27:33.378 [2024-12-16 19:29:17.531245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.378 [2024-12-16 19:29:17.557064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.378 [2024-12-16 19:29:17.557108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:27:33.378 [2024-12-16 19:29:17.557120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.772 ms 00:27:33.378 [2024-12-16 19:29:17.557127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.378 [2024-12-16 19:29:17.582399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.378 [2024-12-16 19:29:17.582442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:33.378 [2024-12-16 19:29:17.582454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.205 ms 00:27:33.378 [2024-12-16 19:29:17.582462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.378 [2024-12-16 19:29:17.607516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.378 [2024-12-16 19:29:17.607560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:33.378 [2024-12-16 19:29:17.607571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.935 ms 00:27:33.378 [2024-12-16 19:29:17.607578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.378 [2024-12-16 19:29:17.607626] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:33.378 [2024-12-16 19:29:17.607642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 111616 / 261120 wr_cnt: 1 state: open 00:27:33.378 [2024-12-16 19:29:17.607652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:27:33.378 [2024-12-16 19:29:17.607660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:33.378 [2024-12-16 19:29:17.607668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:33.378 [2024-12-16 19:29:17.607677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:33.378 [2024-12-16 19:29:17.607692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:33.378 [2024-12-16 19:29:17.607701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:33.378 [2024-12-16 19:29:17.607709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 
00:27:33.378 [2024-12-16 19:29:17.607718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:33.378 [2024-12-16 19:29:17.607727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:33.378 [2024-12-16 19:29:17.607735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:33.378 [2024-12-16 19:29:17.607743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:33.378 [2024-12-16 19:29:17.607750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:33.378 [2024-12-16 19:29:17.607758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:33.378 [2024-12-16 19:29:17.607766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:33.378 [2024-12-16 19:29:17.607775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:33.378 [2024-12-16 19:29:17.607782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:33.378 [2024-12-16 19:29:17.607791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:33.378 [2024-12-16 19:29:17.607798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:33.378 [2024-12-16 19:29:17.607806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:33.378 [2024-12-16 19:29:17.607813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:33.378 [2024-12-16 19:29:17.607820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:33.378 [2024-12-16 19:29:17.607828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:33.378 [2024-12-16 19:29:17.607835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:33.378 [2024-12-16 19:29:17.607842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:33.378 [2024-12-16 19:29:17.607849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:33.378 [2024-12-16 19:29:17.607856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:33.378 [2024-12-16 19:29:17.607864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:33.379 [2024-12-16 19:29:17.607872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:33.379 [2024-12-16 19:29:17.607879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:33.379 [2024-12-16 19:29:17.607886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:33.379 [2024-12-16 19:29:17.607894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:33.379 [2024-12-16 19:29:17.607901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 
wr_cnt: 0 state: free 00:27:33.379 [2024-12-16 19:29:17.607909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:33.379 [2024-12-16 19:29:17.607916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:33.379 [2024-12-16 19:29:17.607923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:33.379 [2024-12-16 19:29:17.607931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:33.379 [2024-12-16 19:29:17.607939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:33.379 [2024-12-16 19:29:17.607946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:33.379 [2024-12-16 19:29:17.607954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:33.379 [2024-12-16 19:29:17.607961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:33.379 [2024-12-16 19:29:17.607968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:33.379 [2024-12-16 19:29:17.607976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:33.379 [2024-12-16 19:29:17.607983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:33.379 [2024-12-16 19:29:17.607992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:33.379 [2024-12-16 19:29:17.608000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:33.379 [2024-12-16 19:29:17.608009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:33.379 [2024-12-16 19:29:17.608017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:33.379 [2024-12-16 19:29:17.608028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:33.379 [2024-12-16 19:29:17.608035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:33.379 [2024-12-16 19:29:17.608043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:33.379 [2024-12-16 19:29:17.608051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:33.379 [2024-12-16 19:29:17.608059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:33.379 [2024-12-16 19:29:17.608066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:33.379 [2024-12-16 19:29:17.608074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:33.379 [2024-12-16 19:29:17.608081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:33.379 [2024-12-16 19:29:17.608089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:33.379 [2024-12-16 19:29:17.608096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:33.379 [2024-12-16 19:29:17.608103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:33.379 [2024-12-16 19:29:17.608110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:33.379 [2024-12-16 19:29:17.608118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:33.379 [2024-12-16 19:29:17.608125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:33.379 [2024-12-16 19:29:17.608132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:33.379 [2024-12-16 19:29:17.608139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:33.379 [2024-12-16 19:29:17.608146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:33.379 [2024-12-16 19:29:17.608153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:33.379 [2024-12-16 19:29:17.608161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:33.379 [2024-12-16 19:29:17.608168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:33.379 [2024-12-16 19:29:17.608188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:33.379 [2024-12-16 19:29:17.608196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:33.379 [2024-12-16 19:29:17.608203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:33.379 [2024-12-16 19:29:17.608211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:33.379 [2024-12-16 19:29:17.608218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:33.379 [2024-12-16 19:29:17.608226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:33.379 [2024-12-16 19:29:17.608233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:33.379 [2024-12-16 19:29:17.608241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:33.379 [2024-12-16 19:29:17.608249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:33.379 [2024-12-16 19:29:17.608257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:33.379 [2024-12-16 19:29:17.608265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:33.379 [2024-12-16 19:29:17.608273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:33.379 [2024-12-16 19:29:17.608281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:33.379 [2024-12-16 19:29:17.608289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:33.379 [2024-12-16 19:29:17.608298] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:33.379 [2024-12-16 19:29:17.608305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:33.379 [2024-12-16 19:29:17.608313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:33.379 [2024-12-16 19:29:17.608321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:33.379 [2024-12-16 19:29:17.608329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:33.379 [2024-12-16 19:29:17.608336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:33.379 [2024-12-16 19:29:17.608344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:33.379 [2024-12-16 19:29:17.608351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:33.379 [2024-12-16 19:29:17.608359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:33.379 [2024-12-16 19:29:17.608367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:33.379 [2024-12-16 19:29:17.608374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:33.379 [2024-12-16 19:29:17.608381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:33.379 [2024-12-16 19:29:17.608389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:33.379 [2024-12-16 19:29:17.608397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:33.379 [2024-12-16 19:29:17.608405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:33.379 [2024-12-16 19:29:17.608412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:33.379 [2024-12-16 19:29:17.608419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:33.379 [2024-12-16 19:29:17.608427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:33.379 [2024-12-16 19:29:17.608442] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:33.379 [2024-12-16 19:29:17.608450] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 216c5858-1f24-4a0e-85c5-3daa64fc07f6 00:27:33.379 [2024-12-16 19:29:17.608478] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 111616 00:27:33.379 [2024-12-16 19:29:17.608486] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 112576 00:27:33.379 [2024-12-16 19:29:17.608494] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 111616 00:27:33.379 [2024-12-16 19:29:17.608503] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0086 00:27:33.379 [2024-12-16 19:29:17.608510] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:33.379 [2024-12-16 19:29:17.608519] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:33.379 [2024-12-16 19:29:17.608527] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 
0 00:27:33.379 [2024-12-16 19:29:17.608541] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:33.379 [2024-12-16 19:29:17.608548] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:33.379 [2024-12-16 19:29:17.608556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.379 [2024-12-16 19:29:17.608564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:33.379 [2024-12-16 19:29:17.608573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.930 ms 00:27:33.379 [2024-12-16 19:29:17.608580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.379 [2024-12-16 19:29:17.622205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.379 [2024-12-16 19:29:17.622246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:33.379 [2024-12-16 19:29:17.622257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.592 ms 00:27:33.379 [2024-12-16 19:29:17.622266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.379 [2024-12-16 19:29:17.622673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.380 [2024-12-16 19:29:17.622689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:33.380 [2024-12-16 19:29:17.622699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.384 ms 00:27:33.380 [2024-12-16 19:29:17.622719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.380 [2024-12-16 19:29:17.659346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:33.380 [2024-12-16 19:29:17.659388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:33.380 [2024-12-16 19:29:17.659401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:33.380 [2024-12-16 19:29:17.659410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.380 [2024-12-16 19:29:17.659470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:33.380 [2024-12-16 19:29:17.659481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:33.380 [2024-12-16 19:29:17.659490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:33.380 [2024-12-16 19:29:17.659503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.380 [2024-12-16 19:29:17.659571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:33.380 [2024-12-16 19:29:17.659583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:33.380 [2024-12-16 19:29:17.659592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:33.380 [2024-12-16 19:29:17.659601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.380 [2024-12-16 19:29:17.659617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:33.380 [2024-12-16 19:29:17.659627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:33.380 [2024-12-16 19:29:17.659636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:33.380 [2024-12-16 19:29:17.659645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.640 [2024-12-16 19:29:17.744251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:33.640 [2024-12-16 19:29:17.744305] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:33.641 [2024-12-16 19:29:17.744319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:33.641 [2024-12-16 19:29:17.744328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.641 [2024-12-16 19:29:17.814417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:33.641 [2024-12-16 19:29:17.814473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:33.641 [2024-12-16 19:29:17.814486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:33.641 [2024-12-16 19:29:17.814502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.641 [2024-12-16 19:29:17.814603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:33.641 [2024-12-16 19:29:17.814614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:33.641 [2024-12-16 19:29:17.814624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:33.641 [2024-12-16 19:29:17.814633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.641 [2024-12-16 19:29:17.814671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:33.641 [2024-12-16 19:29:17.814681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:33.641 [2024-12-16 19:29:17.814689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:33.641 [2024-12-16 19:29:17.814698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.641 [2024-12-16 19:29:17.814803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:33.641 [2024-12-16 19:29:17.814813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:33.641 [2024-12-16 19:29:17.814822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:33.641 [2024-12-16 19:29:17.814829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.641 [2024-12-16 19:29:17.814862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:33.641 [2024-12-16 19:29:17.814871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:33.641 [2024-12-16 19:29:17.814880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:33.641 [2024-12-16 19:29:17.814888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.641 [2024-12-16 19:29:17.814933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:33.641 [2024-12-16 19:29:17.814943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:33.641 [2024-12-16 19:29:17.814952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:33.641 [2024-12-16 19:29:17.814959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.641 [2024-12-16 19:29:17.815007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:33.641 [2024-12-16 19:29:17.815018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:33.641 [2024-12-16 19:29:17.815027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:33.641 [2024-12-16 19:29:17.815036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.641 [2024-12-16 19:29:17.815200] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] 
Management process finished, name 'FTL shutdown', duration = 574.729 ms, result 0 00:27:35.556 00:27:35.556 00:27:35.556 19:29:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:27:37.469 19:29:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:27:37.730 [2024-12-16 19:29:21.859584] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:27:37.730 [2024-12-16 19:29:21.859753] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83265 ] 00:27:37.730 [2024-12-16 19:29:22.026628] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:37.991 [2024-12-16 19:29:22.149047] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:27:38.251 [2024-12-16 19:29:22.451352] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:38.251 [2024-12-16 19:29:22.451439] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:38.513 [2024-12-16 19:29:22.614418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.514 [2024-12-16 19:29:22.614481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:38.514 [2024-12-16 19:29:22.614496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:38.514 [2024-12-16 19:29:22.614506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.514 [2024-12-16 19:29:22.614582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.514 [2024-12-16 19:29:22.614596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:38.514 [2024-12-16 19:29:22.614605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:27:38.514 [2024-12-16 19:29:22.614614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.514 [2024-12-16 19:29:22.614637] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:38.514 [2024-12-16 19:29:22.615372] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:38.514 [2024-12-16 19:29:22.615394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.514 [2024-12-16 19:29:22.615403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:38.514 [2024-12-16 19:29:22.615413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.763 ms 00:27:38.514 [2024-12-16 19:29:22.615421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.514 [2024-12-16 19:29:22.617198] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:27:38.514 [2024-12-16 19:29:22.631876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.514 [2024-12-16 19:29:22.631923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:27:38.514 [2024-12-16 19:29:22.631937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.707 ms 00:27:38.514 [2024-12-16 19:29:22.631946] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.514 [2024-12-16 19:29:22.632039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.514 [2024-12-16 19:29:22.632050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:27:38.514 [2024-12-16 19:29:22.632059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:27:38.514 [2024-12-16 19:29:22.632067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.514 [2024-12-16 19:29:22.640633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.514 [2024-12-16 19:29:22.640673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:38.514 [2024-12-16 19:29:22.640684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.480 ms 00:27:38.514 [2024-12-16 19:29:22.640699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.514 [2024-12-16 19:29:22.640785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.514 [2024-12-16 19:29:22.640795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:38.514 [2024-12-16 19:29:22.640804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:27:38.514 [2024-12-16 19:29:22.640812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.514 [2024-12-16 19:29:22.640858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.514 [2024-12-16 19:29:22.640869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:38.514 [2024-12-16 19:29:22.640878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:27:38.514 [2024-12-16 19:29:22.640886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.514 [2024-12-16 19:29:22.640912] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:38.514 [2024-12-16 19:29:22.645014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.514 [2024-12-16 19:29:22.645048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:38.514 [2024-12-16 19:29:22.645063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.107 ms 00:27:38.514 [2024-12-16 19:29:22.645072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.514 [2024-12-16 19:29:22.645112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.514 [2024-12-16 19:29:22.645121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:38.514 [2024-12-16 19:29:22.645130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:27:38.514 [2024-12-16 19:29:22.645137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.514 [2024-12-16 19:29:22.645205] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:27:38.514 [2024-12-16 19:29:22.645231] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:27:38.514 [2024-12-16 19:29:22.645269] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:27:38.514 [2024-12-16 19:29:22.645290] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:27:38.514 [2024-12-16 19:29:22.645396] 
upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:38.514 [2024-12-16 19:29:22.645407] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:38.514 [2024-12-16 19:29:22.645419] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:27:38.514 [2024-12-16 19:29:22.645430] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:38.514 [2024-12-16 19:29:22.645440] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:38.514 [2024-12-16 19:29:22.645448] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:27:38.514 [2024-12-16 19:29:22.645457] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:38.514 [2024-12-16 19:29:22.645465] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:38.514 [2024-12-16 19:29:22.645475] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:38.514 [2024-12-16 19:29:22.645483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.514 [2024-12-16 19:29:22.645491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:38.514 [2024-12-16 19:29:22.645500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.281 ms 00:27:38.514 [2024-12-16 19:29:22.645507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.514 [2024-12-16 19:29:22.645596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.514 [2024-12-16 19:29:22.645605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:38.514 [2024-12-16 19:29:22.645613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:27:38.514 [2024-12-16 19:29:22.645621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.514 [2024-12-16 19:29:22.645722] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:38.514 [2024-12-16 19:29:22.645733] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:38.514 [2024-12-16 19:29:22.645741] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:38.514 [2024-12-16 19:29:22.645749] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:38.514 [2024-12-16 19:29:22.645758] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:38.514 [2024-12-16 19:29:22.645765] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:38.514 [2024-12-16 19:29:22.645772] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:27:38.514 [2024-12-16 19:29:22.645782] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:38.514 [2024-12-16 19:29:22.645789] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:27:38.514 [2024-12-16 19:29:22.645796] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:38.514 [2024-12-16 19:29:22.645803] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:38.514 [2024-12-16 19:29:22.645810] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:27:38.514 [2024-12-16 19:29:22.645818] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:38.514 
[2024-12-16 19:29:22.645836] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:38.514 [2024-12-16 19:29:22.645843] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:27:38.514 [2024-12-16 19:29:22.645850] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:38.514 [2024-12-16 19:29:22.645857] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:38.514 [2024-12-16 19:29:22.645864] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:27:38.514 [2024-12-16 19:29:22.645871] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:38.514 [2024-12-16 19:29:22.645878] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:38.514 [2024-12-16 19:29:22.645885] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:27:38.514 [2024-12-16 19:29:22.645892] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:38.514 [2024-12-16 19:29:22.645899] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:38.514 [2024-12-16 19:29:22.645905] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:27:38.514 [2024-12-16 19:29:22.645911] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:38.514 [2024-12-16 19:29:22.645917] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:38.514 [2024-12-16 19:29:22.645924] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:27:38.514 [2024-12-16 19:29:22.645931] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:38.514 [2024-12-16 19:29:22.645938] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:38.514 [2024-12-16 19:29:22.645945] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:27:38.514 [2024-12-16 19:29:22.645951] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:38.514 [2024-12-16 19:29:22.645958] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:38.514 [2024-12-16 19:29:22.645965] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:27:38.514 [2024-12-16 19:29:22.645971] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:38.514 [2024-12-16 19:29:22.645978] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:38.514 [2024-12-16 19:29:22.645985] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:27:38.514 [2024-12-16 19:29:22.645991] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:38.514 [2024-12-16 19:29:22.645998] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:38.515 [2024-12-16 19:29:22.646005] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:27:38.515 [2024-12-16 19:29:22.646011] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:38.515 [2024-12-16 19:29:22.646018] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:38.515 [2024-12-16 19:29:22.646024] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:27:38.515 [2024-12-16 19:29:22.646031] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:38.515 [2024-12-16 19:29:22.646038] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:38.515 [2024-12-16 19:29:22.646046] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region sb_mirror 00:27:38.515 [2024-12-16 19:29:22.646056] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:38.515 [2024-12-16 19:29:22.646064] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:38.515 [2024-12-16 19:29:22.646072] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:38.515 [2024-12-16 19:29:22.646080] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:38.515 [2024-12-16 19:29:22.646088] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:38.515 [2024-12-16 19:29:22.646095] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:38.515 [2024-12-16 19:29:22.646101] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:38.515 [2024-12-16 19:29:22.646108] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:38.515 [2024-12-16 19:29:22.646117] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:38.515 [2024-12-16 19:29:22.646127] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:38.515 [2024-12-16 19:29:22.646138] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:27:38.515 [2024-12-16 19:29:22.646146] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:27:38.515 [2024-12-16 19:29:22.646154] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:27:38.515 [2024-12-16 19:29:22.646162] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:27:38.515 [2024-12-16 19:29:22.646169] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:27:38.515 [2024-12-16 19:29:22.646190] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:27:38.515 [2024-12-16 19:29:22.646198] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:27:38.515 [2024-12-16 19:29:22.646205] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:27:38.515 [2024-12-16 19:29:22.646212] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:27:38.515 [2024-12-16 19:29:22.646220] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:27:38.515 [2024-12-16 19:29:22.646228] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:27:38.515 [2024-12-16 19:29:22.646235] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:27:38.515 [2024-12-16 19:29:22.646243] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:27:38.515 [2024-12-16 19:29:22.646251] 
upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:27:38.515 [2024-12-16 19:29:22.646259] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:38.515 [2024-12-16 19:29:22.646267] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:38.515 [2024-12-16 19:29:22.646276] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:38.515 [2024-12-16 19:29:22.646284] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:38.515 [2024-12-16 19:29:22.646292] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:38.515 [2024-12-16 19:29:22.646299] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:38.515 [2024-12-16 19:29:22.646307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.515 [2024-12-16 19:29:22.646315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:38.515 [2024-12-16 19:29:22.646324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.653 ms 00:27:38.515 [2024-12-16 19:29:22.646331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.515 [2024-12-16 19:29:22.679291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.515 [2024-12-16 19:29:22.679384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:38.515 [2024-12-16 19:29:22.679396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.913 ms 00:27:38.515 [2024-12-16 19:29:22.679409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.515 [2024-12-16 19:29:22.679499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.515 [2024-12-16 19:29:22.679508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:38.515 [2024-12-16 19:29:22.679517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:27:38.515 [2024-12-16 19:29:22.679525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.515 [2024-12-16 19:29:22.725295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.515 [2024-12-16 19:29:22.725345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:38.515 [2024-12-16 19:29:22.725359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.706 ms 00:27:38.515 [2024-12-16 19:29:22.725368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.515 [2024-12-16 19:29:22.725421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.515 [2024-12-16 19:29:22.725432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:38.515 [2024-12-16 19:29:22.725447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:38.515 [2024-12-16 19:29:22.725456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.515 [2024-12-16 19:29:22.726097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.515 
[2024-12-16 19:29:22.726136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:38.515 [2024-12-16 19:29:22.726147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.560 ms 00:27:38.515 [2024-12-16 19:29:22.726156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.515 [2024-12-16 19:29:22.726345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.515 [2024-12-16 19:29:22.726357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:38.515 [2024-12-16 19:29:22.726369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.133 ms 00:27:38.515 [2024-12-16 19:29:22.726378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.515 [2024-12-16 19:29:22.742678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.515 [2024-12-16 19:29:22.742721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:38.515 [2024-12-16 19:29:22.742732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.279 ms 00:27:38.515 [2024-12-16 19:29:22.742741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.515 [2024-12-16 19:29:22.757458] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:27:38.515 [2024-12-16 19:29:22.757505] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:27:38.515 [2024-12-16 19:29:22.757519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.515 [2024-12-16 19:29:22.757529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:27:38.515 [2024-12-16 19:29:22.757539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.663 ms 00:27:38.515 [2024-12-16 19:29:22.757546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.515 [2024-12-16 19:29:22.784963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.515 [2024-12-16 19:29:22.785010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:27:38.515 [2024-12-16 19:29:22.785023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.356 ms 00:27:38.515 [2024-12-16 19:29:22.785032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.515 [2024-12-16 19:29:22.798900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.515 [2024-12-16 19:29:22.798947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:27:38.515 [2024-12-16 19:29:22.798958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.806 ms 00:27:38.515 [2024-12-16 19:29:22.798966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.515 [2024-12-16 19:29:22.811706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.515 [2024-12-16 19:29:22.811747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:27:38.515 [2024-12-16 19:29:22.811758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.686 ms 00:27:38.515 [2024-12-16 19:29:22.811766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.515 [2024-12-16 19:29:22.812456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.515 [2024-12-16 19:29:22.812484] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:38.515 [2024-12-16 19:29:22.812499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.569 ms 00:27:38.515 [2024-12-16 19:29:22.812508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.777 [2024-12-16 19:29:22.878887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.777 [2024-12-16 19:29:22.878948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:27:38.777 [2024-12-16 19:29:22.878972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 66.357 ms 00:27:38.777 [2024-12-16 19:29:22.878981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.777 [2024-12-16 19:29:22.890541] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:27:38.777 [2024-12-16 19:29:22.893620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.777 [2024-12-16 19:29:22.893660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:38.777 [2024-12-16 19:29:22.893673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.577 ms 00:27:38.777 [2024-12-16 19:29:22.893682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.777 [2024-12-16 19:29:22.893783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.777 [2024-12-16 19:29:22.893796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:27:38.777 [2024-12-16 19:29:22.893806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:27:38.777 [2024-12-16 19:29:22.893818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.777 [2024-12-16 19:29:22.895714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.777 [2024-12-16 19:29:22.895762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:38.777 [2024-12-16 19:29:22.895774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.855 ms 00:27:38.777 [2024-12-16 19:29:22.895782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.777 [2024-12-16 19:29:22.895815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.777 [2024-12-16 19:29:22.895826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:38.777 [2024-12-16 19:29:22.895836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:27:38.777 [2024-12-16 19:29:22.895844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.777 [2024-12-16 19:29:22.895891] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:27:38.777 [2024-12-16 19:29:22.895903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.777 [2024-12-16 19:29:22.895912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:27:38.777 [2024-12-16 19:29:22.895920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:27:38.777 [2024-12-16 19:29:22.895928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.777 [2024-12-16 19:29:22.922451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.777 [2024-12-16 19:29:22.922497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:38.777 [2024-12-16 19:29:22.922516] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 26.504 ms 00:27:38.777 [2024-12-16 19:29:22.922525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.777 [2024-12-16 19:29:22.922623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.777 [2024-12-16 19:29:22.922633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:38.777 [2024-12-16 19:29:22.922642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:27:38.777 [2024-12-16 19:29:22.922650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.777 [2024-12-16 19:29:22.923961] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 309.040 ms, result 0 00:27:40.161  [2024-12-16T19:29:25.457Z] Copying: 1052/1048576 [kB] (1052 kBps) [2024-12-16T19:29:26.400Z] Copying: 5372/1048576 [kB] (4320 kBps) [2024-12-16T19:29:27.342Z] Copying: 31/1024 [MB] (25 MBps) [2024-12-16T19:29:28.285Z] Copying: 63/1024 [MB] (32 MBps) [2024-12-16T19:29:29.228Z] Copying: 94/1024 [MB] (30 MBps) [2024-12-16T19:29:30.171Z] Copying: 120/1024 [MB] (26 MBps) [2024-12-16T19:29:31.116Z] Copying: 143/1024 [MB] (23 MBps) [2024-12-16T19:29:32.502Z] Copying: 174/1024 [MB] (30 MBps) [2024-12-16T19:29:33.446Z] Copying: 209/1024 [MB] (35 MBps) [2024-12-16T19:29:34.390Z] Copying: 241/1024 [MB] (31 MBps) [2024-12-16T19:29:35.333Z] Copying: 271/1024 [MB] (29 MBps) [2024-12-16T19:29:36.275Z] Copying: 302/1024 [MB] (31 MBps) [2024-12-16T19:29:37.217Z] Copying: 332/1024 [MB] (30 MBps) [2024-12-16T19:29:38.212Z] Copying: 363/1024 [MB] (31 MBps) [2024-12-16T19:29:39.187Z] Copying: 394/1024 [MB] (30 MBps) [2024-12-16T19:29:40.133Z] Copying: 431/1024 [MB] (37 MBps) [2024-12-16T19:29:41.519Z] Copying: 464/1024 [MB] (33 MBps) [2024-12-16T19:29:42.462Z] Copying: 500/1024 [MB] (36 MBps) [2024-12-16T19:29:43.406Z] Copying: 540/1024 [MB] (39 MBps) [2024-12-16T19:29:44.349Z] Copying: 573/1024 [MB] (33 MBps) [2024-12-16T19:29:45.296Z] Copying: 604/1024 [MB] (30 MBps) [2024-12-16T19:29:46.241Z] Copying: 640/1024 [MB] (36 MBps) [2024-12-16T19:29:47.184Z] Copying: 674/1024 [MB] (33 MBps) [2024-12-16T19:29:48.127Z] Copying: 709/1024 [MB] (35 MBps) [2024-12-16T19:29:49.513Z] Copying: 744/1024 [MB] (34 MBps) [2024-12-16T19:29:50.455Z] Copying: 778/1024 [MB] (34 MBps) [2024-12-16T19:29:51.396Z] Copying: 805/1024 [MB] (26 MBps) [2024-12-16T19:29:52.340Z] Copying: 841/1024 [MB] (35 MBps) [2024-12-16T19:29:53.284Z] Copying: 872/1024 [MB] (31 MBps) [2024-12-16T19:29:54.226Z] Copying: 901/1024 [MB] (29 MBps) [2024-12-16T19:29:55.171Z] Copying: 940/1024 [MB] (38 MBps) [2024-12-16T19:29:56.114Z] Copying: 968/1024 [MB] (28 MBps) [2024-12-16T19:29:57.057Z] Copying: 996/1024 [MB] (28 MBps) [2024-12-16T19:29:57.057Z] Copying: 1024/1024 [MB] (average 30 MBps)[2024-12-16 19:29:57.033415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:12.703 [2024-12-16 19:29:57.033497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:12.703 [2024-12-16 19:29:57.033514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:12.703 [2024-12-16 19:29:57.033524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:12.703 [2024-12-16 19:29:57.033547] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:12.703 [2024-12-16 19:29:57.036943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:28:12.703 [2024-12-16 19:29:57.036993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:12.703 [2024-12-16 19:29:57.037006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.380 ms 00:28:12.703 [2024-12-16 19:29:57.037014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:12.703 [2024-12-16 19:29:57.037317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:12.703 [2024-12-16 19:29:57.037336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:12.703 [2024-12-16 19:29:57.037347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.277 ms 00:28:12.703 [2024-12-16 19:29:57.037355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:12.703 [2024-12-16 19:29:57.050880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:12.703 [2024-12-16 19:29:57.050941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:28:12.703 [2024-12-16 19:29:57.050954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.507 ms 00:28:12.703 [2024-12-16 19:29:57.050962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:12.965 [2024-12-16 19:29:57.057113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:12.965 [2024-12-16 19:29:57.057163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:28:12.965 [2024-12-16 19:29:57.057194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.106 ms 00:28:12.965 [2024-12-16 19:29:57.057203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:12.965 [2024-12-16 19:29:57.084239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:12.965 [2024-12-16 19:29:57.084307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:28:12.965 [2024-12-16 19:29:57.084320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.981 ms 00:28:12.965 [2024-12-16 19:29:57.084328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:12.965 [2024-12-16 19:29:57.100283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:12.965 [2024-12-16 19:29:57.100332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:28:12.965 [2024-12-16 19:29:57.100345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.905 ms 00:28:12.965 [2024-12-16 19:29:57.100353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:12.965 [2024-12-16 19:29:57.105233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:12.965 [2024-12-16 19:29:57.105283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:28:12.965 [2024-12-16 19:29:57.105294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.826 ms 00:28:12.965 [2024-12-16 19:29:57.105310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:12.965 [2024-12-16 19:29:57.131466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:12.965 [2024-12-16 19:29:57.131517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:28:12.965 [2024-12-16 19:29:57.131529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.138 ms 00:28:12.965 [2024-12-16 19:29:57.131537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:12.965 [2024-12-16 19:29:57.156663] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:12.965 [2024-12-16 19:29:57.156709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:28:12.965 [2024-12-16 19:29:57.156720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.079 ms 00:28:12.965 [2024-12-16 19:29:57.156727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:12.965 [2024-12-16 19:29:57.181773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:12.965 [2024-12-16 19:29:57.181818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:28:12.965 [2024-12-16 19:29:57.181829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.997 ms 00:28:12.965 [2024-12-16 19:29:57.181836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:12.965 [2024-12-16 19:29:57.206561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:12.965 [2024-12-16 19:29:57.206607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:28:12.965 [2024-12-16 19:29:57.206618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.653 ms 00:28:12.965 [2024-12-16 19:29:57.206625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:12.965 [2024-12-16 19:29:57.206669] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:12.966 [2024-12-16 19:29:57.206686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:28:12.966 [2024-12-16 19:29:57.206698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:28:12.966 [2024-12-16 19:29:57.206707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:28:12.966 [2024-12-16 19:29:57.206715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:12.966 [2024-12-16 19:29:57.206723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:12.966 [2024-12-16 19:29:57.206730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:12.966 [2024-12-16 19:29:57.206739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:12.966 [2024-12-16 19:29:57.206746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:12.966 [2024-12-16 19:29:57.206754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:12.966 [2024-12-16 19:29:57.206762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:12.966 [2024-12-16 19:29:57.206770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:12.966 [2024-12-16 19:29:57.206778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:12.966 [2024-12-16 19:29:57.206785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:12.966 [2024-12-16 19:29:57.206793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:12.966 [2024-12-16 19:29:57.206801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 
wr_cnt: 0 state: free 00:28:12.966 [2024-12-16 19:29:57.206808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:12.966 [2024-12-16 19:29:57.206816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:12.966 [2024-12-16 19:29:57.206823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:12.966 [2024-12-16 19:29:57.206831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:28:12.966 [2024-12-16 19:29:57.206838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:28:12.966 [2024-12-16 19:29:57.206847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:28:12.966 [2024-12-16 19:29:57.206854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:28:12.966 [2024-12-16 19:29:57.206861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:28:12.966 [2024-12-16 19:29:57.206869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:28:12.966 [2024-12-16 19:29:57.206877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:28:12.966 [2024-12-16 19:29:57.206885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:28:12.966 [2024-12-16 19:29:57.206893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:28:12.966 [2024-12-16 19:29:57.206900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:28:12.966 [2024-12-16 19:29:57.206908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:28:12.966 [2024-12-16 19:29:57.206916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:28:12.966 [2024-12-16 19:29:57.206924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:28:12.966 [2024-12-16 19:29:57.206932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:28:12.966 [2024-12-16 19:29:57.206939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:28:12.966 [2024-12-16 19:29:57.206947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:28:12.966 [2024-12-16 19:29:57.206954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:28:12.966 [2024-12-16 19:29:57.206962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:28:12.966 [2024-12-16 19:29:57.206969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:28:12.966 [2024-12-16 19:29:57.206977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:28:12.966 [2024-12-16 19:29:57.206985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:28:12.966 [2024-12-16 19:29:57.206992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 40: 0 / 261120 wr_cnt: 0 state: free 00:28:12.966 [2024-12-16 19:29:57.207000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:28:12.966 [2024-12-16 19:29:57.207007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:28:12.966 [2024-12-16 19:29:57.207014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:28:12.966 [2024-12-16 19:29:57.207022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:28:12.966 [2024-12-16 19:29:57.207029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:28:12.966 [2024-12-16 19:29:57.207037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:28:12.966 [2024-12-16 19:29:57.207045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:28:12.966 [2024-12-16 19:29:57.207053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:28:12.966 [2024-12-16 19:29:57.207061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:28:12.966 [2024-12-16 19:29:57.207069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:28:12.966 [2024-12-16 19:29:57.207077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:28:12.966 [2024-12-16 19:29:57.207084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:28:12.966 [2024-12-16 19:29:57.207092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:28:12.966 [2024-12-16 19:29:57.207099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:28:12.966 [2024-12-16 19:29:57.207115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:28:12.966 [2024-12-16 19:29:57.207123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:28:12.966 [2024-12-16 19:29:57.207130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:28:12.966 [2024-12-16 19:29:57.207139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:28:12.966 [2024-12-16 19:29:57.207147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:28:12.966 [2024-12-16 19:29:57.207155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:28:12.966 [2024-12-16 19:29:57.207164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:28:12.966 [2024-12-16 19:29:57.207187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:28:12.966 [2024-12-16 19:29:57.207196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:28:12.966 [2024-12-16 19:29:57.207204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:28:12.966 [2024-12-16 19:29:57.207212] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:28:12.966 [2024-12-16 19:29:57.207220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:28:12.966 [2024-12-16 19:29:57.207228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:28:12.966 [2024-12-16 19:29:57.207236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:28:12.966 [2024-12-16 19:29:57.207244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:28:12.966 [2024-12-16 19:29:57.207251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:28:12.966 [2024-12-16 19:29:57.207259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:28:12.966 [2024-12-16 19:29:57.207267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:28:12.966 [2024-12-16 19:29:57.207274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:28:12.966 [2024-12-16 19:29:57.207283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:28:12.966 [2024-12-16 19:29:57.207291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:28:12.966 [2024-12-16 19:29:57.207298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:28:12.966 [2024-12-16 19:29:57.207307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:28:12.966 [2024-12-16 19:29:57.207315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:28:12.966 [2024-12-16 19:29:57.207322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:28:12.966 [2024-12-16 19:29:57.207330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:28:12.966 [2024-12-16 19:29:57.207340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:28:12.966 [2024-12-16 19:29:57.207348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:28:12.966 [2024-12-16 19:29:57.207356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:28:12.966 [2024-12-16 19:29:57.207363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:28:12.966 [2024-12-16 19:29:57.207371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:28:12.966 [2024-12-16 19:29:57.207378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:28:12.966 [2024-12-16 19:29:57.207385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:28:12.966 [2024-12-16 19:29:57.207392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:28:12.966 [2024-12-16 19:29:57.207400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:28:12.966 [2024-12-16 19:29:57.207408] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:28:12.966 [2024-12-16 19:29:57.207416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:28:12.966 [2024-12-16 19:29:57.207432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:28:12.967 [2024-12-16 19:29:57.207440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:28:12.967 [2024-12-16 19:29:57.207448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:28:12.967 [2024-12-16 19:29:57.207456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:28:12.967 [2024-12-16 19:29:57.207464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:28:12.967 [2024-12-16 19:29:57.207471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:28:12.967 [2024-12-16 19:29:57.207479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:28:12.967 [2024-12-16 19:29:57.207486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:28:12.967 [2024-12-16 19:29:57.207494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:28:12.967 [2024-12-16 19:29:57.207510] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:12.967 [2024-12-16 19:29:57.207519] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 216c5858-1f24-4a0e-85c5-3daa64fc07f6 00:28:12.967 [2024-12-16 19:29:57.207528] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:28:12.967 [2024-12-16 19:29:57.207536] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 153024 00:28:12.967 [2024-12-16 19:29:57.207547] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 151040 00:28:12.967 [2024-12-16 19:29:57.207556] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0131 00:28:12.967 [2024-12-16 19:29:57.207563] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:12.967 [2024-12-16 19:29:57.207580] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:12.967 [2024-12-16 19:29:57.207588] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:12.967 [2024-12-16 19:29:57.207595] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:12.967 [2024-12-16 19:29:57.207601] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:12.967 [2024-12-16 19:29:57.207609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:12.967 [2024-12-16 19:29:57.207617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:12.967 [2024-12-16 19:29:57.207626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.941 ms 00:28:12.967 [2024-12-16 19:29:57.207634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:12.967 [2024-12-16 19:29:57.221258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:12.967 [2024-12-16 19:29:57.221305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:12.967 [2024-12-16 19:29:57.221316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 13.604 ms 00:28:12.967 [2024-12-16 19:29:57.221324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:12.967 [2024-12-16 19:29:57.221726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:12.967 [2024-12-16 19:29:57.221755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:12.967 [2024-12-16 19:29:57.221766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.363 ms 00:28:12.967 [2024-12-16 19:29:57.221773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:12.967 [2024-12-16 19:29:57.258122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:12.967 [2024-12-16 19:29:57.258204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:12.967 [2024-12-16 19:29:57.258217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:12.967 [2024-12-16 19:29:57.258225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:12.967 [2024-12-16 19:29:57.258284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:12.967 [2024-12-16 19:29:57.258294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:12.967 [2024-12-16 19:29:57.258302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:12.967 [2024-12-16 19:29:57.258311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:12.967 [2024-12-16 19:29:57.258398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:12.967 [2024-12-16 19:29:57.258410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:12.967 [2024-12-16 19:29:57.258418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:12.967 [2024-12-16 19:29:57.258426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:12.967 [2024-12-16 19:29:57.258442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:12.967 [2024-12-16 19:29:57.258450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:12.967 [2024-12-16 19:29:57.258458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:12.967 [2024-12-16 19:29:57.258466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.228 [2024-12-16 19:29:57.342119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:13.228 [2024-12-16 19:29:57.342196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:13.228 [2024-12-16 19:29:57.342210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:13.228 [2024-12-16 19:29:57.342219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.228 [2024-12-16 19:29:57.411432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:13.228 [2024-12-16 19:29:57.411494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:13.228 [2024-12-16 19:29:57.411507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:13.228 [2024-12-16 19:29:57.411516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.228 [2024-12-16 19:29:57.411577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:13.228 [2024-12-16 19:29:57.411594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 
00:28:13.228 [2024-12-16 19:29:57.411603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:13.228 [2024-12-16 19:29:57.411612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.228 [2024-12-16 19:29:57.411674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:13.228 [2024-12-16 19:29:57.411686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:13.228 [2024-12-16 19:29:57.411694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:13.228 [2024-12-16 19:29:57.411703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.228 [2024-12-16 19:29:57.411800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:13.228 [2024-12-16 19:29:57.411810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:13.228 [2024-12-16 19:29:57.411822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:13.228 [2024-12-16 19:29:57.411830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.228 [2024-12-16 19:29:57.411865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:13.228 [2024-12-16 19:29:57.411875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:13.228 [2024-12-16 19:29:57.411883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:13.228 [2024-12-16 19:29:57.411891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.228 [2024-12-16 19:29:57.411933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:13.228 [2024-12-16 19:29:57.411942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:13.228 [2024-12-16 19:29:57.411954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:13.228 [2024-12-16 19:29:57.411962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.228 [2024-12-16 19:29:57.412015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:13.228 [2024-12-16 19:29:57.412026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:13.228 [2024-12-16 19:29:57.412034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:13.228 [2024-12-16 19:29:57.412042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.228 [2024-12-16 19:29:57.412207] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 378.734 ms, result 0 00:28:14.171 00:28:14.172 00:28:14.172 19:29:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:28:15.556 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:28:15.556 19:29:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:28:15.556 [2024-12-16 19:29:59.866304] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:28:15.557 [2024-12-16 19:29:59.866427] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83656 ] 00:28:15.817 [2024-12-16 19:30:00.028551] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:15.817 [2024-12-16 19:30:00.146768] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:28:16.419 [2024-12-16 19:30:00.443530] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:16.419 [2024-12-16 19:30:00.443620] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:16.419 [2024-12-16 19:30:00.606846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:16.419 [2024-12-16 19:30:00.606918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:16.419 [2024-12-16 19:30:00.606933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:16.419 [2024-12-16 19:30:00.606942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.419 [2024-12-16 19:30:00.606998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:16.419 [2024-12-16 19:30:00.607012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:16.419 [2024-12-16 19:30:00.607021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:28:16.419 [2024-12-16 19:30:00.607029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.419 [2024-12-16 19:30:00.607049] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:16.419 [2024-12-16 19:30:00.607802] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:16.419 [2024-12-16 19:30:00.607831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:16.419 [2024-12-16 19:30:00.607840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:16.419 [2024-12-16 19:30:00.607850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.787 ms 00:28:16.419 [2024-12-16 19:30:00.607859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.419 [2024-12-16 19:30:00.609545] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:28:16.419 [2024-12-16 19:30:00.623428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:16.419 [2024-12-16 19:30:00.623482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:28:16.419 [2024-12-16 19:30:00.623494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.886 ms 00:28:16.419 [2024-12-16 19:30:00.623502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.419 [2024-12-16 19:30:00.623586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:16.419 [2024-12-16 19:30:00.623596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:28:16.419 [2024-12-16 19:30:00.623605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:28:16.419 [2024-12-16 19:30:00.623613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.419 [2024-12-16 19:30:00.631993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:28:16.419 [2024-12-16 19:30:00.632043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:16.419 [2024-12-16 19:30:00.632059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.301 ms 00:28:16.419 [2024-12-16 19:30:00.632068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.419 [2024-12-16 19:30:00.632150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:16.419 [2024-12-16 19:30:00.632160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:16.419 [2024-12-16 19:30:00.632187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:28:16.419 [2024-12-16 19:30:00.632196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.419 [2024-12-16 19:30:00.632243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:16.419 [2024-12-16 19:30:00.632253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:16.420 [2024-12-16 19:30:00.632261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:28:16.420 [2024-12-16 19:30:00.632272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.420 [2024-12-16 19:30:00.632296] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:16.420 [2024-12-16 19:30:00.636426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:16.420 [2024-12-16 19:30:00.636470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:16.420 [2024-12-16 19:30:00.636481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.135 ms 00:28:16.420 [2024-12-16 19:30:00.636489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.420 [2024-12-16 19:30:00.636527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:16.420 [2024-12-16 19:30:00.636536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:16.420 [2024-12-16 19:30:00.636545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:28:16.420 [2024-12-16 19:30:00.636552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.420 [2024-12-16 19:30:00.636604] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:28:16.420 [2024-12-16 19:30:00.636629] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:28:16.420 [2024-12-16 19:30:00.636670] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:28:16.420 [2024-12-16 19:30:00.636688] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:28:16.420 [2024-12-16 19:30:00.636795] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:16.420 [2024-12-16 19:30:00.636806] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:16.420 [2024-12-16 19:30:00.636818] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:28:16.420 [2024-12-16 19:30:00.636828] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:16.420 [2024-12-16 19:30:00.636837] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:16.420 [2024-12-16 19:30:00.636846] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:28:16.420 [2024-12-16 19:30:00.636854] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:16.420 [2024-12-16 19:30:00.636866] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:16.420 [2024-12-16 19:30:00.636873] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:16.420 [2024-12-16 19:30:00.636881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:16.420 [2024-12-16 19:30:00.636890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:16.420 [2024-12-16 19:30:00.636898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.280 ms 00:28:16.420 [2024-12-16 19:30:00.636905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.420 [2024-12-16 19:30:00.636990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:16.420 [2024-12-16 19:30:00.636999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:16.420 [2024-12-16 19:30:00.637008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:28:16.420 [2024-12-16 19:30:00.637015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.420 [2024-12-16 19:30:00.637116] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:16.420 [2024-12-16 19:30:00.637126] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:16.420 [2024-12-16 19:30:00.637134] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:16.420 [2024-12-16 19:30:00.637142] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:16.420 [2024-12-16 19:30:00.637151] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:16.420 [2024-12-16 19:30:00.637158] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:16.420 [2024-12-16 19:30:00.637166] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:28:16.420 [2024-12-16 19:30:00.637191] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:16.420 [2024-12-16 19:30:00.637198] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:28:16.420 [2024-12-16 19:30:00.637205] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:16.420 [2024-12-16 19:30:00.637212] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:16.420 [2024-12-16 19:30:00.637222] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:28:16.420 [2024-12-16 19:30:00.637229] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:16.420 [2024-12-16 19:30:00.637244] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:16.420 [2024-12-16 19:30:00.637251] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:28:16.420 [2024-12-16 19:30:00.637258] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:16.420 [2024-12-16 19:30:00.637266] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:16.420 [2024-12-16 19:30:00.637273] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:28:16.420 [2024-12-16 19:30:00.637280] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:16.420 [2024-12-16 19:30:00.637287] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:16.420 [2024-12-16 19:30:00.637294] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:28:16.420 [2024-12-16 19:30:00.637301] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:16.420 [2024-12-16 19:30:00.637308] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:16.420 [2024-12-16 19:30:00.637315] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:28:16.420 [2024-12-16 19:30:00.637321] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:16.420 [2024-12-16 19:30:00.637329] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:16.420 [2024-12-16 19:30:00.637336] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:28:16.420 [2024-12-16 19:30:00.637343] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:16.420 [2024-12-16 19:30:00.637350] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:16.420 [2024-12-16 19:30:00.637356] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:28:16.420 [2024-12-16 19:30:00.637363] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:16.420 [2024-12-16 19:30:00.637369] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:16.420 [2024-12-16 19:30:00.637376] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:28:16.420 [2024-12-16 19:30:00.637382] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:16.420 [2024-12-16 19:30:00.637388] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:16.420 [2024-12-16 19:30:00.637396] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:28:16.420 [2024-12-16 19:30:00.637402] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:16.420 [2024-12-16 19:30:00.637409] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:16.420 [2024-12-16 19:30:00.637415] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:28:16.420 [2024-12-16 19:30:00.637422] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:16.420 [2024-12-16 19:30:00.637428] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:16.420 [2024-12-16 19:30:00.637435] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:28:16.420 [2024-12-16 19:30:00.637441] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:16.420 [2024-12-16 19:30:00.637449] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:16.420 [2024-12-16 19:30:00.637458] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:16.420 [2024-12-16 19:30:00.637466] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:16.420 [2024-12-16 19:30:00.637473] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:16.420 [2024-12-16 19:30:00.637481] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:16.420 [2024-12-16 19:30:00.637488] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:16.420 [2024-12-16 19:30:00.637494] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:16.420 
[2024-12-16 19:30:00.637501] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:16.420 [2024-12-16 19:30:00.637509] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:16.420 [2024-12-16 19:30:00.637516] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:16.420 [2024-12-16 19:30:00.637524] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:16.420 [2024-12-16 19:30:00.637536] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:16.421 [2024-12-16 19:30:00.637545] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:28:16.421 [2024-12-16 19:30:00.637553] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:28:16.421 [2024-12-16 19:30:00.637559] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:28:16.421 [2024-12-16 19:30:00.637568] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:28:16.421 [2024-12-16 19:30:00.637576] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:28:16.421 [2024-12-16 19:30:00.637584] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:28:16.421 [2024-12-16 19:30:00.637591] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:28:16.421 [2024-12-16 19:30:00.637599] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:28:16.421 [2024-12-16 19:30:00.637606] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:28:16.421 [2024-12-16 19:30:00.637613] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:28:16.421 [2024-12-16 19:30:00.637620] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:28:16.421 [2024-12-16 19:30:00.637627] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:28:16.421 [2024-12-16 19:30:00.637635] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:28:16.421 [2024-12-16 19:30:00.637642] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:28:16.421 [2024-12-16 19:30:00.637649] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:16.421 [2024-12-16 19:30:00.637657] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:16.421 [2024-12-16 19:30:00.637665] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:28:16.421 [2024-12-16 19:30:00.637672] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:16.421 [2024-12-16 19:30:00.637680] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:16.421 [2024-12-16 19:30:00.637687] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:16.421 [2024-12-16 19:30:00.637701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:16.421 [2024-12-16 19:30:00.637709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:16.421 [2024-12-16 19:30:00.637716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.654 ms 00:28:16.421 [2024-12-16 19:30:00.637723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.421 [2024-12-16 19:30:00.669929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:16.421 [2024-12-16 19:30:00.669980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:16.421 [2024-12-16 19:30:00.669995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.161 ms 00:28:16.421 [2024-12-16 19:30:00.670003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.421 [2024-12-16 19:30:00.670088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:16.421 [2024-12-16 19:30:00.670097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:16.421 [2024-12-16 19:30:00.670106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:28:16.421 [2024-12-16 19:30:00.670117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.421 [2024-12-16 19:30:00.716350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:16.421 [2024-12-16 19:30:00.716409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:16.421 [2024-12-16 19:30:00.716423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.145 ms 00:28:16.421 [2024-12-16 19:30:00.716431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.421 [2024-12-16 19:30:00.716483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:16.421 [2024-12-16 19:30:00.716497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:16.421 [2024-12-16 19:30:00.716507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:16.421 [2024-12-16 19:30:00.716515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.421 [2024-12-16 19:30:00.717105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:16.421 [2024-12-16 19:30:00.717140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:16.421 [2024-12-16 19:30:00.717151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.515 ms 00:28:16.421 [2024-12-16 19:30:00.717160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.421 [2024-12-16 19:30:00.717339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:16.421 [2024-12-16 19:30:00.717353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:16.421 [2024-12-16 19:30:00.717363] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.131 ms 00:28:16.421 [2024-12-16 19:30:00.717371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.421 [2024-12-16 19:30:00.733064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:16.421 [2024-12-16 19:30:00.733115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:16.421 [2024-12-16 19:30:00.733126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.672 ms 00:28:16.421 [2024-12-16 19:30:00.733135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.723 [2024-12-16 19:30:00.747134] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:28:16.723 [2024-12-16 19:30:00.747194] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:28:16.723 [2024-12-16 19:30:00.747208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:16.723 [2024-12-16 19:30:00.747217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:28:16.723 [2024-12-16 19:30:00.747227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.946 ms 00:28:16.723 [2024-12-16 19:30:00.747234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.723 [2024-12-16 19:30:00.772878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:16.723 [2024-12-16 19:30:00.772931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:28:16.723 [2024-12-16 19:30:00.772944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.592 ms 00:28:16.723 [2024-12-16 19:30:00.772952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.723 [2024-12-16 19:30:00.785818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:16.723 [2024-12-16 19:30:00.785870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:28:16.723 [2024-12-16 19:30:00.785882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.824 ms 00:28:16.723 [2024-12-16 19:30:00.785889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.723 [2024-12-16 19:30:00.798457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:16.723 [2024-12-16 19:30:00.798507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:28:16.723 [2024-12-16 19:30:00.798519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.524 ms 00:28:16.723 [2024-12-16 19:30:00.798527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.723 [2024-12-16 19:30:00.799196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:16.723 [2024-12-16 19:30:00.799231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:16.723 [2024-12-16 19:30:00.799242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.550 ms 00:28:16.723 [2024-12-16 19:30:00.799251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.723 [2024-12-16 19:30:00.864283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:16.723 [2024-12-16 19:30:00.864355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:28:16.723 [2024-12-16 19:30:00.864370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 65.012 ms 00:28:16.723 [2024-12-16 19:30:00.864378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.723 [2024-12-16 19:30:00.875619] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:28:16.723 [2024-12-16 19:30:00.878629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:16.723 [2024-12-16 19:30:00.878671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:16.723 [2024-12-16 19:30:00.878684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.197 ms 00:28:16.723 [2024-12-16 19:30:00.878694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.723 [2024-12-16 19:30:00.878779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:16.723 [2024-12-16 19:30:00.878790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:28:16.723 [2024-12-16 19:30:00.878802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:28:16.723 [2024-12-16 19:30:00.878811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.723 [2024-12-16 19:30:00.879694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:16.723 [2024-12-16 19:30:00.879741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:16.723 [2024-12-16 19:30:00.879753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.844 ms 00:28:16.723 [2024-12-16 19:30:00.879762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.723 [2024-12-16 19:30:00.879791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:16.723 [2024-12-16 19:30:00.879801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:16.723 [2024-12-16 19:30:00.879810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:16.723 [2024-12-16 19:30:00.879824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.723 [2024-12-16 19:30:00.879866] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:28:16.723 [2024-12-16 19:30:00.879877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:16.723 [2024-12-16 19:30:00.879890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:28:16.723 [2024-12-16 19:30:00.879900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:28:16.723 [2024-12-16 19:30:00.879910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.723 [2024-12-16 19:30:00.905534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:16.723 [2024-12-16 19:30:00.905586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:16.723 [2024-12-16 19:30:00.905604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.604 ms 00:28:16.723 [2024-12-16 19:30:00.905613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.723 [2024-12-16 19:30:00.905697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:16.723 [2024-12-16 19:30:00.905707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:16.723 [2024-12-16 19:30:00.905716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:28:16.723 [2024-12-16 19:30:00.905724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:28:16.723 [2024-12-16 19:30:00.907166] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 299.837 ms, result 0 00:28:18.116  [2024-12-16T19:30:03.413Z] Copying: 13/1024 [MB] (13 MBps) [2024-12-16T19:30:04.355Z] Copying: 23/1024 [MB] (10 MBps) [2024-12-16T19:30:05.298Z] Copying: 37/1024 [MB] (13 MBps) [2024-12-16T19:30:06.242Z] Copying: 52/1024 [MB] (14 MBps) [2024-12-16T19:30:07.184Z] Copying: 68/1024 [MB] (15 MBps) [2024-12-16T19:30:08.127Z] Copying: 88/1024 [MB] (19 MBps) [2024-12-16T19:30:09.511Z] Copying: 104/1024 [MB] (16 MBps) [2024-12-16T19:30:10.453Z] Copying: 115/1024 [MB] (10 MBps) [2024-12-16T19:30:11.396Z] Copying: 125/1024 [MB] (10 MBps) [2024-12-16T19:30:12.340Z] Copying: 147/1024 [MB] (21 MBps) [2024-12-16T19:30:13.282Z] Copying: 171/1024 [MB] (24 MBps) [2024-12-16T19:30:14.225Z] Copying: 190/1024 [MB] (18 MBps) [2024-12-16T19:30:15.168Z] Copying: 210/1024 [MB] (20 MBps) [2024-12-16T19:30:16.109Z] Copying: 227/1024 [MB] (16 MBps) [2024-12-16T19:30:17.496Z] Copying: 249/1024 [MB] (21 MBps) [2024-12-16T19:30:18.439Z] Copying: 267/1024 [MB] (18 MBps) [2024-12-16T19:30:19.382Z] Copying: 289/1024 [MB] (21 MBps) [2024-12-16T19:30:20.326Z] Copying: 311/1024 [MB] (21 MBps) [2024-12-16T19:30:21.269Z] Copying: 330/1024 [MB] (19 MBps) [2024-12-16T19:30:22.212Z] Copying: 353/1024 [MB] (22 MBps) [2024-12-16T19:30:23.154Z] Copying: 367/1024 [MB] (14 MBps) [2024-12-16T19:30:24.157Z] Copying: 388/1024 [MB] (20 MBps) [2024-12-16T19:30:25.104Z] Copying: 405/1024 [MB] (17 MBps) [2024-12-16T19:30:26.488Z] Copying: 423/1024 [MB] (18 MBps) [2024-12-16T19:30:27.430Z] Copying: 438/1024 [MB] (14 MBps) [2024-12-16T19:30:28.373Z] Copying: 453/1024 [MB] (15 MBps) [2024-12-16T19:30:29.315Z] Copying: 473/1024 [MB] (20 MBps) [2024-12-16T19:30:30.258Z] Copying: 484/1024 [MB] (10 MBps) [2024-12-16T19:30:31.202Z] Copying: 494/1024 [MB] (10 MBps) [2024-12-16T19:30:32.144Z] Copying: 516/1024 [MB] (22 MBps) [2024-12-16T19:30:33.087Z] Copying: 538/1024 [MB] (22 MBps) [2024-12-16T19:30:34.474Z] Copying: 553/1024 [MB] (14 MBps) [2024-12-16T19:30:35.417Z] Copying: 570/1024 [MB] (16 MBps) [2024-12-16T19:30:36.358Z] Copying: 585/1024 [MB] (15 MBps) [2024-12-16T19:30:37.302Z] Copying: 600/1024 [MB] (14 MBps) [2024-12-16T19:30:38.246Z] Copying: 615/1024 [MB] (15 MBps) [2024-12-16T19:30:39.191Z] Copying: 626/1024 [MB] (10 MBps) [2024-12-16T19:30:40.133Z] Copying: 636/1024 [MB] (10 MBps) [2024-12-16T19:30:41.108Z] Copying: 647/1024 [MB] (10 MBps) [2024-12-16T19:30:42.494Z] Copying: 667/1024 [MB] (19 MBps) [2024-12-16T19:30:43.438Z] Copying: 682/1024 [MB] (15 MBps) [2024-12-16T19:30:44.383Z] Copying: 698/1024 [MB] (15 MBps) [2024-12-16T19:30:45.326Z] Copying: 713/1024 [MB] (14 MBps) [2024-12-16T19:30:46.268Z] Copying: 733/1024 [MB] (20 MBps) [2024-12-16T19:30:47.230Z] Copying: 747/1024 [MB] (14 MBps) [2024-12-16T19:30:48.186Z] Copying: 763/1024 [MB] (15 MBps) [2024-12-16T19:30:49.129Z] Copying: 775/1024 [MB] (12 MBps) [2024-12-16T19:30:50.516Z] Copying: 786/1024 [MB] (10 MBps) [2024-12-16T19:30:51.087Z] Copying: 796/1024 [MB] (10 MBps) [2024-12-16T19:30:52.474Z] Copying: 807/1024 [MB] (10 MBps) [2024-12-16T19:30:53.417Z] Copying: 817/1024 [MB] (10 MBps) [2024-12-16T19:30:54.360Z] Copying: 828/1024 [MB] (10 MBps) [2024-12-16T19:30:55.304Z] Copying: 839/1024 [MB] (11 MBps) [2024-12-16T19:30:56.247Z] Copying: 850/1024 [MB] (11 MBps) [2024-12-16T19:30:57.192Z] Copying: 861/1024 [MB] (10 MBps) [2024-12-16T19:30:58.135Z] Copying: 872/1024 [MB] (10 MBps) 
[2024-12-16T19:30:59.522Z] Copying: 883/1024 [MB] (10 MBps) [2024-12-16T19:31:00.095Z] Copying: 895/1024 [MB] (11 MBps) [2024-12-16T19:31:01.483Z] Copying: 911/1024 [MB] (16 MBps) [2024-12-16T19:31:02.427Z] Copying: 928/1024 [MB] (16 MBps) [2024-12-16T19:31:03.372Z] Copying: 941/1024 [MB] (12 MBps) [2024-12-16T19:31:04.315Z] Copying: 951/1024 [MB] (10 MBps) [2024-12-16T19:31:05.270Z] Copying: 969/1024 [MB] (17 MBps) [2024-12-16T19:31:06.213Z] Copying: 985/1024 [MB] (16 MBps) [2024-12-16T19:31:07.158Z] Copying: 999/1024 [MB] (13 MBps) [2024-12-16T19:31:07.732Z] Copying: 1012/1024 [MB] (13 MBps) [2024-12-16T19:31:07.732Z] Copying: 1024/1024 [MB] (average 15 MBps)[2024-12-16 19:31:07.494803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:23.378 [2024-12-16 19:31:07.494879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:29:23.378 [2024-12-16 19:31:07.494898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:23.378 [2024-12-16 19:31:07.494910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:23.378 [2024-12-16 19:31:07.494941] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:23.378 [2024-12-16 19:31:07.498711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:23.378 [2024-12-16 19:31:07.498849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:29:23.378 [2024-12-16 19:31:07.498912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.747 ms 00:29:23.379 [2024-12-16 19:31:07.498939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:23.379 [2024-12-16 19:31:07.499228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:23.379 [2024-12-16 19:31:07.499261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:29:23.379 [2024-12-16 19:31:07.499286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.244 ms 00:29:23.379 [2024-12-16 19:31:07.499296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:23.379 [2024-12-16 19:31:07.503371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:23.379 [2024-12-16 19:31:07.503396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:29:23.379 [2024-12-16 19:31:07.503412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.060 ms 00:29:23.379 [2024-12-16 19:31:07.503422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:23.379 [2024-12-16 19:31:07.510061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:23.379 [2024-12-16 19:31:07.510094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:29:23.379 [2024-12-16 19:31:07.510104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.619 ms 00:29:23.379 [2024-12-16 19:31:07.510113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:23.379 [2024-12-16 19:31:07.534999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:23.379 [2024-12-16 19:31:07.535036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:29:23.379 [2024-12-16 19:31:07.535047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.835 ms 00:29:23.379 [2024-12-16 19:31:07.535054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:23.379 [2024-12-16 19:31:07.550346] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:23.379 [2024-12-16 19:31:07.550384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:29:23.379 [2024-12-16 19:31:07.550395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.255 ms 00:29:23.379 [2024-12-16 19:31:07.550408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:23.379 [2024-12-16 19:31:07.554979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:23.379 [2024-12-16 19:31:07.555019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:29:23.379 [2024-12-16 19:31:07.555030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.529 ms 00:29:23.379 [2024-12-16 19:31:07.555037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:23.379 [2024-12-16 19:31:07.579747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:23.379 [2024-12-16 19:31:07.579785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:29:23.379 [2024-12-16 19:31:07.579796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.694 ms 00:29:23.379 [2024-12-16 19:31:07.579803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:23.379 [2024-12-16 19:31:07.604654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:23.379 [2024-12-16 19:31:07.604696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:29:23.379 [2024-12-16 19:31:07.604707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.808 ms 00:29:23.379 [2024-12-16 19:31:07.604714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:23.379 [2024-12-16 19:31:07.629522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:23.379 [2024-12-16 19:31:07.629566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:29:23.379 [2024-12-16 19:31:07.629577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.766 ms 00:29:23.379 [2024-12-16 19:31:07.629585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:23.379 [2024-12-16 19:31:07.654450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:23.379 [2024-12-16 19:31:07.654495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:29:23.379 [2024-12-16 19:31:07.654507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.794 ms 00:29:23.379 [2024-12-16 19:31:07.654514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:23.379 [2024-12-16 19:31:07.654566] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:29:23.379 [2024-12-16 19:31:07.654589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:29:23.379 [2024-12-16 19:31:07.654600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:29:23.379 [2024-12-16 19:31:07.654609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:29:23.379 [2024-12-16 19:31:07.654618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:23.379 [2024-12-16 19:31:07.654626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:23.379 [2024-12-16 
19:31:07.654634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:23.379 [2024-12-16 19:31:07.654642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:23.379 [2024-12-16 19:31:07.654650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:23.379 [2024-12-16 19:31:07.654658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:23.379 [2024-12-16 19:31:07.654666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:23.379 [2024-12-16 19:31:07.654674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:23.379 [2024-12-16 19:31:07.654682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:23.379 [2024-12-16 19:31:07.654690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:23.379 [2024-12-16 19:31:07.654700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:23.379 [2024-12-16 19:31:07.654708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:23.379 [2024-12-16 19:31:07.654716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:23.379 [2024-12-16 19:31:07.654724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:23.379 [2024-12-16 19:31:07.654733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:23.379 [2024-12-16 19:31:07.654744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:29:23.379 [2024-12-16 19:31:07.654752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:29:23.379 [2024-12-16 19:31:07.654760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:29:23.379 [2024-12-16 19:31:07.654767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:29:23.379 [2024-12-16 19:31:07.654775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:29:23.379 [2024-12-16 19:31:07.654782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:29:23.379 [2024-12-16 19:31:07.654790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:29:23.379 [2024-12-16 19:31:07.654798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:29:23.379 [2024-12-16 19:31:07.654805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:29:23.379 [2024-12-16 19:31:07.654813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:29:23.379 [2024-12-16 19:31:07.654820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:29:23.379 [2024-12-16 19:31:07.654829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 
00:29:23.379 [2024-12-16 19:31:07.654837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:29:23.379 [2024-12-16 19:31:07.654844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:29:23.379 [2024-12-16 19:31:07.654852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:29:23.379 [2024-12-16 19:31:07.654860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:29:23.379 [2024-12-16 19:31:07.654867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:29:23.379 [2024-12-16 19:31:07.654875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:29:23.379 [2024-12-16 19:31:07.654882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:29:23.379 [2024-12-16 19:31:07.654890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:29:23.379 [2024-12-16 19:31:07.654900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:29:23.379 [2024-12-16 19:31:07.654909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:29:23.379 [2024-12-16 19:31:07.654917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:29:23.379 [2024-12-16 19:31:07.654925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:29:23.379 [2024-12-16 19:31:07.654932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:29:23.379 [2024-12-16 19:31:07.654940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:29:23.379 [2024-12-16 19:31:07.654947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:29:23.379 [2024-12-16 19:31:07.654959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:29:23.379 [2024-12-16 19:31:07.654966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:29:23.379 [2024-12-16 19:31:07.654974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:29:23.379 [2024-12-16 19:31:07.654982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:29:23.379 [2024-12-16 19:31:07.654992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:29:23.379 [2024-12-16 19:31:07.655001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:29:23.379 [2024-12-16 19:31:07.655008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:29:23.379 [2024-12-16 19:31:07.655016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:29:23.379 [2024-12-16 19:31:07.655023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:29:23.379 [2024-12-16 19:31:07.655031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 
wr_cnt: 0 state: free 00:29:23.379 [2024-12-16 19:31:07.655038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:29:23.379 [2024-12-16 19:31:07.655046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:29:23.380 [2024-12-16 19:31:07.655053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:29:23.380 [2024-12-16 19:31:07.655060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:29:23.380 [2024-12-16 19:31:07.655068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:29:23.380 [2024-12-16 19:31:07.655075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:29:23.380 [2024-12-16 19:31:07.655083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:29:23.380 [2024-12-16 19:31:07.655092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:29:23.380 [2024-12-16 19:31:07.655099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:29:23.380 [2024-12-16 19:31:07.655106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:29:23.380 [2024-12-16 19:31:07.655113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:29:23.380 [2024-12-16 19:31:07.655121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:29:23.380 [2024-12-16 19:31:07.655129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:29:23.380 [2024-12-16 19:31:07.655137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:29:23.380 [2024-12-16 19:31:07.655146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:29:23.380 [2024-12-16 19:31:07.655155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:29:23.380 [2024-12-16 19:31:07.655162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:29:23.380 [2024-12-16 19:31:07.655184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:29:23.380 [2024-12-16 19:31:07.655192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:29:23.380 [2024-12-16 19:31:07.655199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:29:23.380 [2024-12-16 19:31:07.655207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:29:23.380 [2024-12-16 19:31:07.655215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:29:23.380 [2024-12-16 19:31:07.655223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:29:23.380 [2024-12-16 19:31:07.655231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:29:23.380 [2024-12-16 19:31:07.655240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 80: 0 / 261120 wr_cnt: 0 state: free 00:29:23.380 [2024-12-16 19:31:07.655248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:29:23.380 [2024-12-16 19:31:07.655256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:29:23.380 [2024-12-16 19:31:07.655265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:29:23.380 [2024-12-16 19:31:07.655274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:29:23.380 [2024-12-16 19:31:07.655282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:29:23.380 [2024-12-16 19:31:07.655289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:29:23.380 [2024-12-16 19:31:07.655297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:29:23.380 [2024-12-16 19:31:07.655305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:29:23.380 [2024-12-16 19:31:07.655312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:29:23.380 [2024-12-16 19:31:07.655320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:29:23.380 [2024-12-16 19:31:07.655328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:29:23.380 [2024-12-16 19:31:07.655336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:29:23.380 [2024-12-16 19:31:07.655344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:29:23.380 [2024-12-16 19:31:07.655351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:29:23.380 [2024-12-16 19:31:07.655360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:29:23.380 [2024-12-16 19:31:07.655368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:29:23.380 [2024-12-16 19:31:07.655376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:29:23.380 [2024-12-16 19:31:07.655384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:29:23.380 [2024-12-16 19:31:07.655391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:29:23.380 [2024-12-16 19:31:07.655399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:29:23.380 [2024-12-16 19:31:07.655415] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:29:23.380 [2024-12-16 19:31:07.655423] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 216c5858-1f24-4a0e-85c5-3daa64fc07f6 00:29:23.380 [2024-12-16 19:31:07.655433] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:29:23.380 [2024-12-16 19:31:07.655442] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:29:23.380 [2024-12-16 19:31:07.655449] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:29:23.380 [2024-12-16 
19:31:07.655457] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:29:23.380 [2024-12-16 19:31:07.655472] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:29:23.380 [2024-12-16 19:31:07.655481] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:29:23.380 [2024-12-16 19:31:07.655489] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:29:23.380 [2024-12-16 19:31:07.655496] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:23.380 [2024-12-16 19:31:07.655504] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:29:23.380 [2024-12-16 19:31:07.655513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:23.380 [2024-12-16 19:31:07.655521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:29:23.380 [2024-12-16 19:31:07.655534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.948 ms 00:29:23.380 [2024-12-16 19:31:07.655543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:23.380 [2024-12-16 19:31:07.668990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:23.380 [2024-12-16 19:31:07.669031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:29:23.380 [2024-12-16 19:31:07.669042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.429 ms 00:29:23.380 [2024-12-16 19:31:07.669050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:23.380 [2024-12-16 19:31:07.669468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:23.380 [2024-12-16 19:31:07.669489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:29:23.380 [2024-12-16 19:31:07.669499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.384 ms 00:29:23.380 [2024-12-16 19:31:07.669507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:23.380 [2024-12-16 19:31:07.705950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:23.380 [2024-12-16 19:31:07.705999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:23.380 [2024-12-16 19:31:07.706012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:23.380 [2024-12-16 19:31:07.706021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:23.380 [2024-12-16 19:31:07.706087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:23.380 [2024-12-16 19:31:07.706098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:23.380 [2024-12-16 19:31:07.706108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:23.380 [2024-12-16 19:31:07.706117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:23.380 [2024-12-16 19:31:07.706228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:23.380 [2024-12-16 19:31:07.706243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:23.380 [2024-12-16 19:31:07.706253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:23.380 [2024-12-16 19:31:07.706262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:23.380 [2024-12-16 19:31:07.706279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:23.380 [2024-12-16 19:31:07.706294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Initialize valid map 00:29:23.380 [2024-12-16 19:31:07.706303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:23.380 [2024-12-16 19:31:07.706311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:23.641 [2024-12-16 19:31:07.791351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:23.641 [2024-12-16 19:31:07.791408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:23.641 [2024-12-16 19:31:07.791421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:23.641 [2024-12-16 19:31:07.791430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:23.641 [2024-12-16 19:31:07.860851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:23.641 [2024-12-16 19:31:07.860914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:23.641 [2024-12-16 19:31:07.860926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:23.641 [2024-12-16 19:31:07.860935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:23.641 [2024-12-16 19:31:07.860995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:23.641 [2024-12-16 19:31:07.861005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:23.641 [2024-12-16 19:31:07.861014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:23.641 [2024-12-16 19:31:07.861023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:23.641 [2024-12-16 19:31:07.861079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:23.641 [2024-12-16 19:31:07.861090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:23.641 [2024-12-16 19:31:07.861104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:23.641 [2024-12-16 19:31:07.861112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:23.641 [2024-12-16 19:31:07.861241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:23.641 [2024-12-16 19:31:07.861257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:23.641 [2024-12-16 19:31:07.861267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:23.641 [2024-12-16 19:31:07.861276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:23.641 [2024-12-16 19:31:07.861313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:23.641 [2024-12-16 19:31:07.861323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:29:23.641 [2024-12-16 19:31:07.861331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:23.641 [2024-12-16 19:31:07.861343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:23.641 [2024-12-16 19:31:07.861392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:23.641 [2024-12-16 19:31:07.861404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:23.641 [2024-12-16 19:31:07.861413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:23.641 [2024-12-16 19:31:07.861421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:23.641 [2024-12-16 19:31:07.861469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:23.641 [2024-12-16 
19:31:07.861481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:23.641 [2024-12-16 19:31:07.861492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:23.641 [2024-12-16 19:31:07.861501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:23.641 [2024-12-16 19:31:07.861632] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 366.810 ms, result 0 00:29:24.584 00:29:24.584 00:29:24.584 19:31:08 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:29:26.525 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:29:26.525 19:31:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:29:26.525 19:31:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:29:26.525 19:31:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:29:26.786 19:31:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:29:26.786 19:31:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:29:26.786 19:31:11 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:29:26.786 19:31:11 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:29:26.786 19:31:11 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 81896 00:29:26.786 19:31:11 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # '[' -z 81896 ']' 00:29:26.786 Process with pid 81896 is not found 00:29:26.786 19:31:11 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@958 -- # kill -0 81896 00:29:26.786 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (81896) - No such process 00:29:26.786 19:31:11 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@981 -- # echo 'Process with pid 81896 is not found' 00:29:26.786 19:31:11 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:29:27.359 19:31:11 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:29:27.359 Remove shared memory files 00:29:27.359 19:31:11 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:29:27.359 19:31:11 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:29:27.359 19:31:11 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:29:27.359 19:31:11 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:29:27.359 19:31:11 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:29:27.359 19:31:11 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:29:27.359 00:29:27.359 real 3m56.616s 00:29:27.359 user 4m19.530s 00:29:27.359 sys 0m26.204s 00:29:27.359 19:31:11 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:27.359 19:31:11 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:27.359 ************************************ 00:29:27.359 END TEST ftl_dirty_shutdown 00:29:27.359 ************************************ 00:29:27.359 19:31:11 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:29:27.359 19:31:11 ftl -- 
common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:27.359 19:31:11 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:27.359 19:31:11 ftl -- common/autotest_common.sh@10 -- # set +x 00:29:27.359 ************************************ 00:29:27.359 START TEST ftl_upgrade_shutdown 00:29:27.359 ************************************ 00:29:27.359 19:31:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:29:27.359 * Looking for test storage... 00:29:27.359 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:29:27.359 19:31:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:27.359 19:31:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:29:27.359 19:31:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:27.359 19:31:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:27.359 19:31:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:27.359 19:31:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:27.359 19:31:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:27.359 19:31:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:29:27.359 19:31:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:29:27.359 19:31:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:29:27.359 19:31:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:29:27.359 19:31:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:29:27.359 19:31:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:29:27.359 19:31:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:29:27.359 19:31:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:27.359 19:31:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:29:27.359 19:31:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:29:27.359 19:31:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:27.359 19:31:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:27.359 19:31:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:29:27.359 19:31:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:29:27.359 19:31:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:27.359 19:31:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:29:27.359 19:31:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:29:27.359 19:31:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:29:27.359 19:31:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:29:27.359 19:31:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:27.359 19:31:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:29:27.360 19:31:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:29:27.360 19:31:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:27.360 19:31:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:27.360 19:31:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:29:27.360 19:31:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:27.360 19:31:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:27.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:27.360 --rc genhtml_branch_coverage=1 00:29:27.360 --rc genhtml_function_coverage=1 00:29:27.360 --rc genhtml_legend=1 00:29:27.360 --rc geninfo_all_blocks=1 00:29:27.360 --rc geninfo_unexecuted_blocks=1 00:29:27.360 00:29:27.360 ' 00:29:27.360 19:31:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:27.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:27.360 --rc genhtml_branch_coverage=1 00:29:27.360 --rc genhtml_function_coverage=1 00:29:27.360 --rc genhtml_legend=1 00:29:27.360 --rc geninfo_all_blocks=1 00:29:27.360 --rc geninfo_unexecuted_blocks=1 00:29:27.360 00:29:27.360 ' 00:29:27.360 19:31:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:27.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:27.360 --rc genhtml_branch_coverage=1 00:29:27.360 --rc genhtml_function_coverage=1 00:29:27.360 --rc genhtml_legend=1 00:29:27.360 --rc geninfo_all_blocks=1 00:29:27.360 --rc geninfo_unexecuted_blocks=1 00:29:27.360 00:29:27.360 ' 00:29:27.360 19:31:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:27.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:27.360 --rc genhtml_branch_coverage=1 00:29:27.360 --rc genhtml_function_coverage=1 00:29:27.360 --rc genhtml_legend=1 00:29:27.360 --rc geninfo_all_blocks=1 00:29:27.360 --rc geninfo_unexecuted_blocks=1 00:29:27.360 00:29:27.360 ' 00:29:27.360 19:31:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:29:27.360 19:31:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:29:27.360 19:31:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:29:27.360 19:31:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:29:27.360 19:31:11 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:29:27.360 19:31:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:29:27.360 19:31:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:27.360 19:31:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:29:27.360 19:31:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:29:27.360 19:31:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:27.360 19:31:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:27.360 19:31:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:29:27.360 19:31:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:29:27.360 19:31:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:27.360 19:31:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:27.360 19:31:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:29:27.360 19:31:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:29:27.360 19:31:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:27.360 19:31:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:27.360 19:31:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:29:27.360 19:31:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:29:27.360 19:31:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:29:27.360 19:31:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:29:27.360 19:31:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:29:27.360 19:31:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:29:27.360 19:31:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:29:27.360 19:31:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:29:27.360 19:31:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:27.360 19:31:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:27.360 19:31:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:29:27.360 19:31:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:29:27.360 19:31:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:29:27.360 19:31:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:29:27.360 19:31:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:29:27.360 19:31:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:29:27.360 19:31:11 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:29:27.360 19:31:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:29:27.360 19:31:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:29:27.360 19:31:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:29:27.360 19:31:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:29:27.360 19:31:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:29:27.360 19:31:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:29:27.360 19:31:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:29:27.360 19:31:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:29:27.360 19:31:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:29:27.360 19:31:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:29:27.360 19:31:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=84444 00:29:27.360 19:31:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:29:27.360 19:31:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 84444 00:29:27.360 19:31:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 84444 ']' 00:29:27.360 19:31:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:27.360 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:27.360 19:31:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:27.360 19:31:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:27.360 19:31:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:27.360 19:31:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:27.360 19:31:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:29:27.622 [2024-12-16 19:31:11.765014] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
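The tcp_target_setup trace above amounts to launching spdk_tgt pinned to core 0 and blocking until its RPC socket answers. A minimal sketch of that launch, assuming the default RPC socket /var/tmp/spdk.sock; the polling loop is an illustrative stand-in for the waitforlisten helper, using the real rpc_get_methods RPC as a readiness probe:

SPDK=/home/vagrant/spdk_repo/spdk
# Start the SPDK target application with its reactor pinned to core 0.
"$SPDK/build/bin/spdk_tgt" --cpumask='[0]' &
spdk_tgt_pid=$!
# Poll the RPC socket until the target responds; stand-in for waitforlisten.
until "$SPDK/scripts/rpc.py" -t 1 rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done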
00:29:27.622 [2024-12-16 19:31:11.765645] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84444 ] 00:29:27.622 [2024-12-16 19:31:11.930724] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:27.883 [2024-12-16 19:31:12.060820] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:29:28.455 19:31:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:28.455 19:31:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:29:28.455 19:31:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:29:28.455 19:31:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:29:28.455 19:31:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:29:28.455 19:31:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:28.455 19:31:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:29:28.455 19:31:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:28.455 19:31:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:29:28.455 19:31:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:28.455 19:31:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:29:28.455 19:31:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:28.455 19:31:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:29:28.455 19:31:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:28.455 19:31:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:29:28.455 19:31:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:28.455 19:31:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:29:28.455 19:31:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:29:28.455 19:31:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:29:28.455 19:31:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:29:28.455 19:31:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:29:28.455 19:31:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:29:28.455 19:31:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:29:29.027 19:31:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:29:29.027 19:31:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:29:29.027 19:31:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:29:29.027 19:31:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=basen1 00:29:29.027 19:31:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:29:29.027 19:31:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:29:29.027 19:31:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 
-- # local nb 00:29:29.027 19:31:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:29:29.027 19:31:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:29:29.027 { 00:29:29.027 "name": "basen1", 00:29:29.027 "aliases": [ 00:29:29.027 "6cc884a1-4460-4a80-b1c0-aef3d71dac9e" 00:29:29.027 ], 00:29:29.027 "product_name": "NVMe disk", 00:29:29.027 "block_size": 4096, 00:29:29.027 "num_blocks": 1310720, 00:29:29.027 "uuid": "6cc884a1-4460-4a80-b1c0-aef3d71dac9e", 00:29:29.027 "numa_id": -1, 00:29:29.027 "assigned_rate_limits": { 00:29:29.027 "rw_ios_per_sec": 0, 00:29:29.027 "rw_mbytes_per_sec": 0, 00:29:29.027 "r_mbytes_per_sec": 0, 00:29:29.027 "w_mbytes_per_sec": 0 00:29:29.027 }, 00:29:29.027 "claimed": true, 00:29:29.027 "claim_type": "read_many_write_one", 00:29:29.027 "zoned": false, 00:29:29.027 "supported_io_types": { 00:29:29.027 "read": true, 00:29:29.027 "write": true, 00:29:29.027 "unmap": true, 00:29:29.027 "flush": true, 00:29:29.027 "reset": true, 00:29:29.027 "nvme_admin": true, 00:29:29.027 "nvme_io": true, 00:29:29.027 "nvme_io_md": false, 00:29:29.027 "write_zeroes": true, 00:29:29.027 "zcopy": false, 00:29:29.027 "get_zone_info": false, 00:29:29.027 "zone_management": false, 00:29:29.027 "zone_append": false, 00:29:29.027 "compare": true, 00:29:29.027 "compare_and_write": false, 00:29:29.027 "abort": true, 00:29:29.027 "seek_hole": false, 00:29:29.027 "seek_data": false, 00:29:29.027 "copy": true, 00:29:29.027 "nvme_iov_md": false 00:29:29.027 }, 00:29:29.027 "driver_specific": { 00:29:29.027 "nvme": [ 00:29:29.027 { 00:29:29.027 "pci_address": "0000:00:11.0", 00:29:29.027 "trid": { 00:29:29.027 "trtype": "PCIe", 00:29:29.027 "traddr": "0000:00:11.0" 00:29:29.027 }, 00:29:29.027 "ctrlr_data": { 00:29:29.027 "cntlid": 0, 00:29:29.027 "vendor_id": "0x1b36", 00:29:29.027 "model_number": "QEMU NVMe Ctrl", 00:29:29.027 "serial_number": "12341", 00:29:29.027 "firmware_revision": "8.0.0", 00:29:29.027 "subnqn": "nqn.2019-08.org.qemu:12341", 00:29:29.027 "oacs": { 00:29:29.027 "security": 0, 00:29:29.027 "format": 1, 00:29:29.027 "firmware": 0, 00:29:29.027 "ns_manage": 1 00:29:29.027 }, 00:29:29.027 "multi_ctrlr": false, 00:29:29.027 "ana_reporting": false 00:29:29.027 }, 00:29:29.027 "vs": { 00:29:29.027 "nvme_version": "1.4" 00:29:29.027 }, 00:29:29.027 "ns_data": { 00:29:29.027 "id": 1, 00:29:29.027 "can_share": false 00:29:29.027 } 00:29:29.027 } 00:29:29.027 ], 00:29:29.027 "mp_policy": "active_passive" 00:29:29.027 } 00:29:29.027 } 00:29:29.027 ]' 00:29:29.027 19:31:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:29:29.027 19:31:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:29:29.027 19:31:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:29:29.027 19:31:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:29:29.027 19:31:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:29:29.027 19:31:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:29:29.027 19:31:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:29:29.027 19:31:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:29:29.027 19:31:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:29:29.027 19:31:13 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:29.027 19:31:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:29:29.288 19:31:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=7fc9b654-c291-424c-8958-19a308878252 00:29:29.288 19:31:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:29:29.288 19:31:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 7fc9b654-c291-424c-8958-19a308878252 00:29:29.549 19:31:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:29:29.810 19:31:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=d6de574c-c813-48fc-be88-b3751b63f243 00:29:29.810 19:31:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u d6de574c-c813-48fc-be88-b3751b63f243 00:29:30.072 19:31:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=5e088db3-695f-451f-963c-b272afd5b043 00:29:30.072 19:31:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z 5e088db3-695f-451f-963c-b272afd5b043 ]] 00:29:30.072 19:31:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 5e088db3-695f-451f-963c-b272afd5b043 5120 00:29:30.072 19:31:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:29:30.072 19:31:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:29:30.072 19:31:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=5e088db3-695f-451f-963c-b272afd5b043 00:29:30.072 19:31:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:29:30.072 19:31:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size 5e088db3-695f-451f-963c-b272afd5b043 00:29:30.072 19:31:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=5e088db3-695f-451f-963c-b272afd5b043 00:29:30.072 19:31:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:29:30.072 19:31:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:29:30.072 19:31:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:29:30.072 19:31:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 5e088db3-695f-451f-963c-b272afd5b043 00:29:30.072 19:31:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:29:30.072 { 00:29:30.072 "name": "5e088db3-695f-451f-963c-b272afd5b043", 00:29:30.072 "aliases": [ 00:29:30.072 "lvs/basen1p0" 00:29:30.072 ], 00:29:30.072 "product_name": "Logical Volume", 00:29:30.072 "block_size": 4096, 00:29:30.072 "num_blocks": 5242880, 00:29:30.072 "uuid": "5e088db3-695f-451f-963c-b272afd5b043", 00:29:30.072 "assigned_rate_limits": { 00:29:30.072 "rw_ios_per_sec": 0, 00:29:30.072 "rw_mbytes_per_sec": 0, 00:29:30.072 "r_mbytes_per_sec": 0, 00:29:30.072 "w_mbytes_per_sec": 0 00:29:30.072 }, 00:29:30.072 "claimed": false, 00:29:30.072 "zoned": false, 00:29:30.072 "supported_io_types": { 00:29:30.072 "read": true, 00:29:30.072 "write": true, 00:29:30.072 "unmap": true, 00:29:30.072 "flush": false, 00:29:30.072 "reset": true, 00:29:30.072 "nvme_admin": false, 00:29:30.072 "nvme_io": false, 00:29:30.072 "nvme_io_md": false, 00:29:30.072 "write_zeroes": 
true, 00:29:30.072 "zcopy": false, 00:29:30.072 "get_zone_info": false, 00:29:30.072 "zone_management": false, 00:29:30.072 "zone_append": false, 00:29:30.072 "compare": false, 00:29:30.072 "compare_and_write": false, 00:29:30.072 "abort": false, 00:29:30.072 "seek_hole": true, 00:29:30.072 "seek_data": true, 00:29:30.072 "copy": false, 00:29:30.072 "nvme_iov_md": false 00:29:30.072 }, 00:29:30.072 "driver_specific": { 00:29:30.072 "lvol": { 00:29:30.072 "lvol_store_uuid": "d6de574c-c813-48fc-be88-b3751b63f243", 00:29:30.072 "base_bdev": "basen1", 00:29:30.072 "thin_provision": true, 00:29:30.072 "num_allocated_clusters": 0, 00:29:30.072 "snapshot": false, 00:29:30.072 "clone": false, 00:29:30.072 "esnap_clone": false 00:29:30.072 } 00:29:30.072 } 00:29:30.072 } 00:29:30.072 ]' 00:29:30.072 19:31:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:29:30.333 19:31:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:29:30.333 19:31:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:29:30.333 19:31:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=5242880 00:29:30.333 19:31:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=20480 00:29:30.333 19:31:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 20480 00:29:30.333 19:31:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:29:30.333 19:31:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:29:30.333 19:31:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:29:30.593 19:31:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:29:30.593 19:31:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:29:30.593 19:31:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:29:30.593 19:31:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:29:30.593 19:31:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:29:30.593 19:31:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 5e088db3-695f-451f-963c-b272afd5b043 -c cachen1p0 --l2p_dram_limit 2 00:29:30.857 [2024-12-16 19:31:15.101903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:30.857 [2024-12-16 19:31:15.101939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:29:30.857 [2024-12-16 19:31:15.101951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:29:30.857 [2024-12-16 19:31:15.101958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:30.857 [2024-12-16 19:31:15.102005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:30.857 [2024-12-16 19:31:15.102012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:29:30.857 [2024-12-16 19:31:15.102020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.033 ms 00:29:30.857 [2024-12-16 19:31:15.102027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:30.857 [2024-12-16 19:31:15.102044] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:29:30.857 [2024-12-16 
19:31:15.102637] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:29:30.857 [2024-12-16 19:31:15.102654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:30.857 [2024-12-16 19:31:15.102660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:29:30.857 [2024-12-16 19:31:15.102670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.612 ms 00:29:30.857 [2024-12-16 19:31:15.102676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:30.857 [2024-12-16 19:31:15.102700] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID 91ef306d-1de1-4576-9514-e2069d79fd9f 00:29:30.857 [2024-12-16 19:31:15.103663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:30.857 [2024-12-16 19:31:15.103681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:29:30.857 [2024-12-16 19:31:15.103688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:29:30.857 [2024-12-16 19:31:15.103696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:30.857 [2024-12-16 19:31:15.108537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:30.857 [2024-12-16 19:31:15.108565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:29:30.857 [2024-12-16 19:31:15.108572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.781 ms 00:29:30.857 [2024-12-16 19:31:15.108579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:30.857 [2024-12-16 19:31:15.108609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:30.857 [2024-12-16 19:31:15.108618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:29:30.857 [2024-12-16 19:31:15.108625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:29:30.857 [2024-12-16 19:31:15.108633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:30.857 [2024-12-16 19:31:15.108671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:30.857 [2024-12-16 19:31:15.108679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:29:30.857 [2024-12-16 19:31:15.108686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:29:30.857 [2024-12-16 19:31:15.108696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:30.857 [2024-12-16 19:31:15.108713] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:29:30.857 [2024-12-16 19:31:15.111604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:30.857 [2024-12-16 19:31:15.111626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:29:30.857 [2024-12-16 19:31:15.111636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.894 ms 00:29:30.857 [2024-12-16 19:31:15.111642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:30.857 [2024-12-16 19:31:15.111664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:30.857 [2024-12-16 19:31:15.111671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:29:30.857 [2024-12-16 19:31:15.111678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:29:30.857 [2024-12-16 19:31:15.111684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:29:30.857 [2024-12-16 19:31:15.111698] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:29:30.857 [2024-12-16 19:31:15.111805] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:29:30.857 [2024-12-16 19:31:15.111817] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:29:30.857 [2024-12-16 19:31:15.111825] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:29:30.857 [2024-12-16 19:31:15.111834] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:29:30.857 [2024-12-16 19:31:15.111841] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:29:30.857 [2024-12-16 19:31:15.111848] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:29:30.857 [2024-12-16 19:31:15.111854] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:29:30.857 [2024-12-16 19:31:15.111863] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:29:30.857 [2024-12-16 19:31:15.111869] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:29:30.857 [2024-12-16 19:31:15.111876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:30.857 [2024-12-16 19:31:15.111881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:29:30.857 [2024-12-16 19:31:15.111888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.179 ms 00:29:30.857 [2024-12-16 19:31:15.111894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:30.857 [2024-12-16 19:31:15.111959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:30.857 [2024-12-16 19:31:15.111971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:29:30.857 [2024-12-16 19:31:15.111978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.053 ms 00:29:30.857 [2024-12-16 19:31:15.111983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:30.857 [2024-12-16 19:31:15.112058] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:29:30.857 [2024-12-16 19:31:15.112065] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:29:30.857 [2024-12-16 19:31:15.112072] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:29:30.857 [2024-12-16 19:31:15.112078] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:30.857 [2024-12-16 19:31:15.112085] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:29:30.857 [2024-12-16 19:31:15.112090] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:29:30.857 [2024-12-16 19:31:15.112097] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:29:30.857 [2024-12-16 19:31:15.112102] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:29:30.857 [2024-12-16 19:31:15.112109] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:29:30.857 [2024-12-16 19:31:15.112114] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:30.857 [2024-12-16 19:31:15.112120] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:29:30.857 [2024-12-16 19:31:15.112127] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 00:29:30.857 [2024-12-16 19:31:15.112134] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:30.857 [2024-12-16 19:31:15.112139] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:29:30.857 [2024-12-16 19:31:15.112145] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:29:30.857 [2024-12-16 19:31:15.112150] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:30.857 [2024-12-16 19:31:15.112158] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:29:30.857 [2024-12-16 19:31:15.112165] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:29:30.857 [2024-12-16 19:31:15.112185] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:30.857 [2024-12-16 19:31:15.112191] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:29:30.857 [2024-12-16 19:31:15.112197] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:29:30.857 [2024-12-16 19:31:15.112202] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:30.857 [2024-12-16 19:31:15.112209] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:29:30.857 [2024-12-16 19:31:15.112214] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:29:30.857 [2024-12-16 19:31:15.112220] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:30.857 [2024-12-16 19:31:15.112225] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:29:30.857 [2024-12-16 19:31:15.112231] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:29:30.857 [2024-12-16 19:31:15.112236] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:30.857 [2024-12-16 19:31:15.112243] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:29:30.857 [2024-12-16 19:31:15.112248] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:29:30.857 [2024-12-16 19:31:15.112255] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:30.857 [2024-12-16 19:31:15.112260] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:29:30.857 [2024-12-16 19:31:15.112267] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:29:30.857 [2024-12-16 19:31:15.112272] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:30.857 [2024-12-16 19:31:15.112279] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:29:30.857 [2024-12-16 19:31:15.112284] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:29:30.857 [2024-12-16 19:31:15.112290] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:30.857 [2024-12-16 19:31:15.112295] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:29:30.857 [2024-12-16 19:31:15.112303] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:29:30.857 [2024-12-16 19:31:15.112308] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:30.857 [2024-12-16 19:31:15.112314] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:29:30.857 [2024-12-16 19:31:15.112319] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:29:30.857 [2024-12-16 19:31:15.112325] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:30.857 [2024-12-16 19:31:15.112330] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:29:30.857 [2024-12-16 19:31:15.112338] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:29:30.857 [2024-12-16 19:31:15.112343] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:29:30.857 [2024-12-16 19:31:15.112349] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:30.857 [2024-12-16 19:31:15.112355] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:29:30.857 [2024-12-16 19:31:15.112363] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:29:30.857 [2024-12-16 19:31:15.112369] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:29:30.857 [2024-12-16 19:31:15.112376] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:29:30.857 [2024-12-16 19:31:15.112381] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:29:30.857 [2024-12-16 19:31:15.112387] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:29:30.857 [2024-12-16 19:31:15.112394] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:29:30.857 [2024-12-16 19:31:15.112402] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:30.857 [2024-12-16 19:31:15.112410] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:29:30.857 [2024-12-16 19:31:15.112417] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:29:30.857 [2024-12-16 19:31:15.112422] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:29:30.857 [2024-12-16 19:31:15.112429] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:29:30.857 [2024-12-16 19:31:15.112434] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:29:30.858 [2024-12-16 19:31:15.112441] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:29:30.858 [2024-12-16 19:31:15.112446] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:29:30.858 [2024-12-16 19:31:15.112453] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:29:30.858 [2024-12-16 19:31:15.112459] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:29:30.858 [2024-12-16 19:31:15.112468] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:29:30.858 [2024-12-16 19:31:15.112473] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:29:30.858 [2024-12-16 19:31:15.112480] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:29:30.858 [2024-12-16 19:31:15.112485] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:29:30.858 [2024-12-16 19:31:15.112492] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:29:30.858 [2024-12-16 19:31:15.112498] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:29:30.858 [2024-12-16 19:31:15.112505] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:30.858 [2024-12-16 19:31:15.112511] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:30.858 [2024-12-16 19:31:15.112518] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:29:30.858 [2024-12-16 19:31:15.112523] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:29:30.858 [2024-12-16 19:31:15.112530] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:29:30.858 [2024-12-16 19:31:15.112536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:30.858 [2024-12-16 19:31:15.112543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:29:30.858 [2024-12-16 19:31:15.112548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.531 ms 00:29:30.858 [2024-12-16 19:31:15.112556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:30.858 [2024-12-16 19:31:15.112584] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
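For reference, the RPC sequence traced above builds the bdev stack that bdev_ftl_create then consumes; the 'FTL startup' management process whose layout dump appears here is its direct result. Every call below is copied from the trace, with the run-specific UUIDs replaced by labeled placeholders:

RPC="$SPDK/scripts/rpc.py"
$RPC bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0    # exposes basen1 (20480 MiB base device)
$RPC bdev_lvol_create_lvstore basen1 lvs                            # prints the lvstore UUID
$RPC bdev_lvol_create basen1p0 20480 -t -u <lvstore-uuid>           # thin-provisioned 20480 MiB lvol
$RPC bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0   # exposes cachen1
$RPC bdev_split_create cachen1 -s 5120 1                            # carves cachen1p0 as the 5120 MiB NV cache
$RPC -t 60 bdev_ftl_create -b ftl -d <lvol-uuid> -c cachen1p0 --l2p_dram_limit 2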
00:29:30.858 [2024-12-16 19:31:15.112594] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:29:35.069 [2024-12-16 19:31:18.849510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:35.069 [2024-12-16 19:31:18.849553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:29:35.069 [2024-12-16 19:31:18.849565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3736.912 ms 00:29:35.069 [2024-12-16 19:31:18.849574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:35.069 [2024-12-16 19:31:18.869686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:35.069 [2024-12-16 19:31:18.869721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:29:35.069 [2024-12-16 19:31:18.869732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.941 ms 00:29:35.069 [2024-12-16 19:31:18.869739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:35.069 [2024-12-16 19:31:18.869794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:35.069 [2024-12-16 19:31:18.869804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:29:35.069 [2024-12-16 19:31:18.869811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:29:35.069 [2024-12-16 19:31:18.869821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:35.069 [2024-12-16 19:31:18.893627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:35.069 [2024-12-16 19:31:18.893656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:29:35.069 [2024-12-16 19:31:18.893664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.780 ms 00:29:35.069 [2024-12-16 19:31:18.893671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:35.069 [2024-12-16 19:31:18.893693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:35.069 [2024-12-16 19:31:18.893704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:29:35.069 [2024-12-16 19:31:18.893710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:29:35.070 [2024-12-16 19:31:18.893717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:35.070 [2024-12-16 19:31:18.894009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:35.070 [2024-12-16 19:31:18.894024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:29:35.070 [2024-12-16 19:31:18.894036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.258 ms 00:29:35.070 [2024-12-16 19:31:18.894044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:35.070 [2024-12-16 19:31:18.894074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:35.070 [2024-12-16 19:31:18.894082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:29:35.070 [2024-12-16 19:31:18.894090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:29:35.070 [2024-12-16 19:31:18.894099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:35.070 [2024-12-16 19:31:18.905322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:35.070 [2024-12-16 19:31:18.905347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:29:35.070 [2024-12-16 19:31:18.905354] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.210 ms 00:29:35.070 [2024-12-16 19:31:18.905361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:35.070 [2024-12-16 19:31:18.928791] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:29:35.070 [2024-12-16 19:31:18.929708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:35.070 [2024-12-16 19:31:18.929739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:29:35.070 [2024-12-16 19:31:18.929755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.291 ms 00:29:35.070 [2024-12-16 19:31:18.929765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:35.070 [2024-12-16 19:31:18.951466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:35.070 [2024-12-16 19:31:18.951490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:29:35.070 [2024-12-16 19:31:18.951500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 21.660 ms 00:29:35.070 [2024-12-16 19:31:18.951506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:35.070 [2024-12-16 19:31:18.951571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:35.070 [2024-12-16 19:31:18.951581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:29:35.070 [2024-12-16 19:31:18.951590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.035 ms 00:29:35.070 [2024-12-16 19:31:18.951597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:35.070 [2024-12-16 19:31:18.968635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:35.070 [2024-12-16 19:31:18.968657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:29:35.070 [2024-12-16 19:31:18.968667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.002 ms 00:29:35.070 [2024-12-16 19:31:18.968673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:35.070 [2024-12-16 19:31:18.985692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:35.070 [2024-12-16 19:31:18.985713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:29:35.070 [2024-12-16 19:31:18.985722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.986 ms 00:29:35.070 [2024-12-16 19:31:18.985727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:35.070 [2024-12-16 19:31:18.986158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:35.070 [2024-12-16 19:31:18.986178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:29:35.070 [2024-12-16 19:31:18.986187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.404 ms 00:29:35.070 [2024-12-16 19:31:18.986194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:35.070 [2024-12-16 19:31:19.045616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:35.070 [2024-12-16 19:31:19.045639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:29:35.070 [2024-12-16 19:31:19.045651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 59.397 ms 00:29:35.070 [2024-12-16 19:31:19.045658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:35.070 [2024-12-16 19:31:19.063862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:29:35.070 [2024-12-16 19:31:19.063885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:29:35.070 [2024-12-16 19:31:19.063895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.150 ms 00:29:35.070 [2024-12-16 19:31:19.063901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:35.070 [2024-12-16 19:31:19.081559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:35.070 [2024-12-16 19:31:19.081581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:29:35.070 [2024-12-16 19:31:19.081591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.630 ms 00:29:35.070 [2024-12-16 19:31:19.081596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:35.070 [2024-12-16 19:31:19.098929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:35.070 [2024-12-16 19:31:19.098950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:29:35.070 [2024-12-16 19:31:19.098959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.305 ms 00:29:35.070 [2024-12-16 19:31:19.098965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:35.070 [2024-12-16 19:31:19.098996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:35.070 [2024-12-16 19:31:19.099003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:29:35.070 [2024-12-16 19:31:19.099013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:29:35.070 [2024-12-16 19:31:19.099019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:35.070 [2024-12-16 19:31:19.099076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:35.070 [2024-12-16 19:31:19.099085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:29:35.070 [2024-12-16 19:31:19.099093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.025 ms 00:29:35.070 [2024-12-16 19:31:19.099098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:35.070 [2024-12-16 19:31:19.099774] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3997.552 ms, result 0 00:29:35.070 { 00:29:35.070 "name": "ftl", 00:29:35.070 "uuid": "91ef306d-1de1-4576-9514-e2069d79fd9f" 00:29:35.070 } 00:29:35.070 19:31:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:29:35.070 [2024-12-16 19:31:19.307333] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:35.070 19:31:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:29:35.331 19:31:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:29:35.592 [2024-12-16 19:31:19.711661] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:29:35.592 19:31:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:29:35.592 [2024-12-16 19:31:19.899886] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:29:35.592 19:31:19 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:29:35.853 19:31:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:29:35.853 19:31:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:29:35.854 19:31:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:29:35.854 19:31:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:29:35.854 19:31:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:29:35.854 19:31:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:29:35.854 19:31:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:29:35.854 19:31:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:29:35.854 19:31:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:29:35.854 19:31:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:29:35.854 Fill FTL, iteration 1 00:29:35.854 19:31:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:29:35.854 19:31:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:29:35.854 19:31:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:35.854 19:31:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:35.854 19:31:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:35.854 19:31:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:29:35.854 19:31:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=84570 00:29:35.854 19:31:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:29:35.854 19:31:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:29:35.854 19:31:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 84570 /var/tmp/spdk.tgt.sock 00:29:35.854 19:31:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 84570 ']' 00:29:35.854 19:31:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:29:35.854 19:31:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:35.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:29:35.854 19:31:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:29:35.854 19:31:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:35.854 19:31:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:36.114 [2024-12-16 19:31:20.280793] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
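The target/initiator handshake traced around this point reduces to four RPCs on the target side and one attach on the initiator side; all of the calls below appear verbatim in the trace (the initiator is a second spdk_tgt instance on core 1 listening on /var/tmp/spdk.tgt.sock):

RPC="$SPDK/scripts/rpc.py"
# Target side: export the ftl bdev over NVMe/TCP on 127.0.0.1:4420.
$RPC nvmf_create_transport --trtype TCP
$RPC nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1
$RPC nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl
$RPC nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1
# Initiator side: attach back over TCP, which surfaces the namespace as ftln1.
$RPC -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0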
00:29:36.114 [2024-12-16 19:31:20.280905] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84570 ] 00:29:36.114 [2024-12-16 19:31:20.438471] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:36.375 [2024-12-16 19:31:20.534779] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:29:36.946 19:31:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:36.946 19:31:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:29:36.946 19:31:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:29:37.207 ftln1 00:29:37.207 19:31:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:29:37.207 19:31:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:29:37.469 19:31:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:29:37.469 19:31:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 84570 00:29:37.469 19:31:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 84570 ']' 00:29:37.469 19:31:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 84570 00:29:37.469 19:31:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:29:37.469 19:31:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:37.469 19:31:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84570 00:29:37.469 killing process with pid 84570 00:29:37.469 19:31:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:37.469 19:31:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:37.469 19:31:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84570' 00:29:37.469 19:31:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 84570 00:29:37.469 19:31:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 84570 00:29:38.852 19:31:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:29:38.852 19:31:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:29:38.852 [2024-12-16 19:31:23.114884] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
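Each 'Fill FTL' pass is a single spdk_dd run against the attached ftln1 bdev: 1024 blocks of 1 MiB read from /dev/urandom and written at queue depth 2, with --seek advancing in 1 MiB units per iteration. The invocation, as traced (reflowed across lines for readability only):

"$SPDK/build/bin/spdk_dd" --cpumask='[1]' \
    --rpc-socket=/var/tmp/spdk.tgt.sock \
    --json="$SPDK/test/ftl/config/ini.json" \
    --if=/dev/urandom --ob=ftln1 \
    --bs=1048576 --count=1024 --qd=2 --seek=0    # 1024 x 1 MiB = 1 GiB written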
00:29:38.852 [2024-12-16 19:31:23.115000] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84617 ] 00:29:39.112 [2024-12-16 19:31:23.273812] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:39.112 [2024-12-16 19:31:23.366853] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:29:40.498  [2024-12-16T19:31:25.793Z] Copying: 231/1024 [MB] (231 MBps) [2024-12-16T19:31:26.736Z] Copying: 485/1024 [MB] (254 MBps) [2024-12-16T19:31:28.121Z] Copying: 748/1024 [MB] (263 MBps) [2024-12-16T19:31:28.121Z] Copying: 1006/1024 [MB] (258 MBps) [2024-12-16T19:31:28.383Z] Copying: 1024/1024 [MB] (average 251 MBps) 00:29:44.029 00:29:44.029 Calculate MD5 checksum, iteration 1 00:29:44.029 19:31:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:29:44.029 19:31:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:29:44.029 19:31:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:29:44.029 19:31:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:44.029 19:31:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:44.029 19:31:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:44.029 19:31:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:29:44.029 19:31:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:29:44.290 [2024-12-16 19:31:28.449612] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
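The verify half simply reverses the dd direction and hashes the read-back data; what steps @43-@48 of the trace amount to (the MD5 value itself shows up a few lines below), with the path shortened to repo-relative:

    # Read the gigabyte just written back out of ftln1 into a scratch file,
    # then record its MD5 for comparison after the upgrade/restart cycle.
    tcp_dd --ib=ftln1 --of=test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0
    sums[0]=$(md5sum test/ftl/file | cut -f1 -d' ')
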
00:29:44.290 [2024-12-16 19:31:28.449735] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84673 ] 00:29:44.290 [2024-12-16 19:31:28.610191] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:44.551 [2024-12-16 19:31:28.752508] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:29:45.939  [2024-12-16T19:31:31.234Z] Copying: 594/1024 [MB] (594 MBps) [2024-12-16T19:31:31.806Z] Copying: 1024/1024 [MB] (average 561 MBps) 00:29:47.452 00:29:47.452 19:31:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:29:47.452 19:31:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:50.038 19:31:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:29:50.038 Fill FTL, iteration 2 00:29:50.038 19:31:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=09f670e9dcec502ef11c64ec716fae23 00:29:50.038 19:31:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:29:50.038 19:31:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:29:50.038 19:31:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:29:50.038 19:31:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:29:50.038 19:31:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:50.038 19:31:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:50.038 19:31:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:50.038 19:31:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:29:50.038 19:31:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:29:50.038 [2024-12-16 19:31:33.988915] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
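Iteration 2 repeats the pass one gigabyte further into the device; the only state that changes between passes is the block offsets:

    # count=1024 blocks of bs=1048576 bytes -> each pass covers exactly 1 GiB,
    # so pass 2 writes and verifies the region behind pass 1 (hence --seek=1024).
    seek=$((seek + count))   # 0 -> 1024 (write offset, in blocks)
    skip=$((skip + count))   # 0 -> 1024 (read offset, in blocks)
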
00:29:50.038 [2024-12-16 19:31:33.989040] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84734 ] 00:29:50.038 [2024-12-16 19:31:34.149585] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:50.038 [2024-12-16 19:31:34.255255] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:29:51.451  [2024-12-16T19:31:36.750Z] Copying: 184/1024 [MB] (184 MBps) [2024-12-16T19:31:37.689Z] Copying: 350/1024 [MB] (166 MBps) [2024-12-16T19:31:38.632Z] Copying: 550/1024 [MB] (200 MBps) [2024-12-16T19:31:40.014Z] Copying: 725/1024 [MB] (175 MBps) [2024-12-16T19:31:40.272Z] Copying: 913/1024 [MB] (188 MBps) [2024-12-16T19:31:40.841Z] Copying: 1024/1024 [MB] (average 187 MBps) 00:29:56.487 00:29:56.487 Calculate MD5 checksum, iteration 2 00:29:56.487 19:31:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:29:56.487 19:31:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:29:56.487 19:31:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:29:56.487 19:31:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:56.487 19:31:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:56.487 19:31:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:56.487 19:31:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:29:56.487 19:31:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:29:56.487 [2024-12-16 19:31:40.796877] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
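Once both checksums are banked, the test flips FTL properties over RPC and sanity-checks that the write-buffer cache actually absorbed data before arming the upgrade path; a sketch of the calls traced below (the RPC names and the jq filter are taken verbatim from the trace, the failure handling is a sketch):

    rpc=scripts/rpc.py

    # Verbose mode exposes the advanced properties (bands, cache chunks).
    $rpc bdev_ftl_set_property -b ftl -p verbose_mode -v true

    # Count cache chunks with non-zero utilization; zero would mean the fill
    # never reached the NV cache, so the test should not proceed.
    used=$($rpc bdev_ftl_get_properties -b ftl \
        | jq '[.properties[] | select(.name == "cache_device")
               | .chunks[] | select(.utilization != 0.0)] | length')
    if [[ $used -eq 0 ]]; then
        echo "no dirty cache chunks after fill" >&2
        exit 1
    fi

    # Arm the upgrade: on shutdown, FTL persists everything a newer layout
    # version needs to take over (prep_upgrade_on_shutdown flips to true below).
    $rpc bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true
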
00:29:56.487 [2024-12-16 19:31:40.797008] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84804 ] 00:29:56.745 [2024-12-16 19:31:40.953218] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:56.745 [2024-12-16 19:31:41.055275] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:29:58.649  [2024-12-16T19:31:43.265Z] Copying: 596/1024 [MB] (596 MBps) [2024-12-16T19:31:44.204Z] Copying: 1024/1024 [MB] (average 657 MBps) 00:29:59.850 00:29:59.850 19:31:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:29:59.850 19:31:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:01.223 19:31:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:30:01.223 19:31:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=1fe019f58bd9e6725aceffa6eea41770 00:30:01.223 19:31:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:30:01.224 19:31:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:30:01.224 19:31:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:30:01.483 [2024-12-16 19:31:45.726299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:01.483 [2024-12-16 19:31:45.726341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:30:01.483 [2024-12-16 19:31:45.726352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:30:01.483 [2024-12-16 19:31:45.726359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:01.483 [2024-12-16 19:31:45.726378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:01.483 [2024-12-16 19:31:45.726387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:30:01.483 [2024-12-16 19:31:45.726393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:30:01.483 [2024-12-16 19:31:45.726399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:01.483 [2024-12-16 19:31:45.726414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:01.483 [2024-12-16 19:31:45.726421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:30:01.483 [2024-12-16 19:31:45.726427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:30:01.483 [2024-12-16 19:31:45.726432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:01.483 [2024-12-16 19:31:45.726482] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.172 ms, result 0 00:30:01.483 true 00:30:01.483 19:31:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:01.744 { 00:30:01.744 "name": "ftl", 00:30:01.744 "properties": [ 00:30:01.744 { 00:30:01.744 "name": "superblock_version", 00:30:01.744 "value": 5, 00:30:01.744 "read-only": true 00:30:01.744 }, 00:30:01.744 { 00:30:01.744 "name": "base_device", 00:30:01.744 "bands": [ 00:30:01.744 { 00:30:01.744 "id": 0, 00:30:01.744 "state": "FREE", 00:30:01.744 "validity": 0.0 
00:30:01.744 }, 00:30:01.744 { 00:30:01.744 "id": 1, 00:30:01.744 "state": "FREE", 00:30:01.744 "validity": 0.0 00:30:01.744 }, 00:30:01.744 { 00:30:01.744 "id": 2, 00:30:01.744 "state": "FREE", 00:30:01.744 "validity": 0.0 00:30:01.744 }, 00:30:01.744 { 00:30:01.744 "id": 3, 00:30:01.744 "state": "FREE", 00:30:01.744 "validity": 0.0 00:30:01.744 }, 00:30:01.744 { 00:30:01.744 "id": 4, 00:30:01.744 "state": "FREE", 00:30:01.744 "validity": 0.0 00:30:01.744 }, 00:30:01.744 { 00:30:01.744 "id": 5, 00:30:01.744 "state": "FREE", 00:30:01.744 "validity": 0.0 00:30:01.744 }, 00:30:01.744 { 00:30:01.744 "id": 6, 00:30:01.744 "state": "FREE", 00:30:01.744 "validity": 0.0 00:30:01.744 }, 00:30:01.744 { 00:30:01.744 "id": 7, 00:30:01.744 "state": "FREE", 00:30:01.744 "validity": 0.0 00:30:01.744 }, 00:30:01.744 { 00:30:01.744 "id": 8, 00:30:01.744 "state": "FREE", 00:30:01.744 "validity": 0.0 00:30:01.744 }, 00:30:01.744 { 00:30:01.744 "id": 9, 00:30:01.744 "state": "FREE", 00:30:01.744 "validity": 0.0 00:30:01.744 }, 00:30:01.744 { 00:30:01.744 "id": 10, 00:30:01.744 "state": "FREE", 00:30:01.744 "validity": 0.0 00:30:01.744 }, 00:30:01.744 { 00:30:01.744 "id": 11, 00:30:01.744 "state": "FREE", 00:30:01.744 "validity": 0.0 00:30:01.744 }, 00:30:01.744 { 00:30:01.744 "id": 12, 00:30:01.744 "state": "FREE", 00:30:01.744 "validity": 0.0 00:30:01.744 }, 00:30:01.744 { 00:30:01.744 "id": 13, 00:30:01.744 "state": "FREE", 00:30:01.744 "validity": 0.0 00:30:01.744 }, 00:30:01.744 { 00:30:01.744 "id": 14, 00:30:01.744 "state": "FREE", 00:30:01.744 "validity": 0.0 00:30:01.744 }, 00:30:01.744 { 00:30:01.744 "id": 15, 00:30:01.744 "state": "FREE", 00:30:01.744 "validity": 0.0 00:30:01.744 }, 00:30:01.744 { 00:30:01.744 "id": 16, 00:30:01.744 "state": "FREE", 00:30:01.744 "validity": 0.0 00:30:01.744 }, 00:30:01.744 { 00:30:01.744 "id": 17, 00:30:01.744 "state": "FREE", 00:30:01.744 "validity": 0.0 00:30:01.744 } 00:30:01.744 ], 00:30:01.744 "read-only": true 00:30:01.744 }, 00:30:01.744 { 00:30:01.744 "name": "cache_device", 00:30:01.744 "type": "bdev", 00:30:01.744 "chunks": [ 00:30:01.744 { 00:30:01.744 "id": 0, 00:30:01.744 "state": "INACTIVE", 00:30:01.744 "utilization": 0.0 00:30:01.744 }, 00:30:01.744 { 00:30:01.744 "id": 1, 00:30:01.744 "state": "CLOSED", 00:30:01.744 "utilization": 1.0 00:30:01.744 }, 00:30:01.744 { 00:30:01.744 "id": 2, 00:30:01.744 "state": "CLOSED", 00:30:01.744 "utilization": 1.0 00:30:01.744 }, 00:30:01.744 { 00:30:01.744 "id": 3, 00:30:01.744 "state": "OPEN", 00:30:01.744 "utilization": 0.001953125 00:30:01.744 }, 00:30:01.744 { 00:30:01.744 "id": 4, 00:30:01.744 "state": "OPEN", 00:30:01.744 "utilization": 0.0 00:30:01.744 } 00:30:01.744 ], 00:30:01.744 "read-only": true 00:30:01.744 }, 00:30:01.744 { 00:30:01.744 "name": "verbose_mode", 00:30:01.744 "value": true, 00:30:01.744 "unit": "", 00:30:01.744 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:30:01.744 }, 00:30:01.744 { 00:30:01.744 "name": "prep_upgrade_on_shutdown", 00:30:01.744 "value": false, 00:30:01.744 "unit": "", 00:30:01.744 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:30:01.744 } 00:30:01.744 ] 00:30:01.744 } 00:30:01.744 19:31:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:30:02.004 [2024-12-16 19:31:46.126662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:30:02.004 [2024-12-16 19:31:46.126838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:30:02.004 [2024-12-16 19:31:46.126852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:30:02.004 [2024-12-16 19:31:46.126858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:02.004 [2024-12-16 19:31:46.126881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:02.004 [2024-12-16 19:31:46.126889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:30:02.004 [2024-12-16 19:31:46.126895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:30:02.004 [2024-12-16 19:31:46.126901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:02.004 [2024-12-16 19:31:46.126916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:02.004 [2024-12-16 19:31:46.126922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:30:02.004 [2024-12-16 19:31:46.126928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:30:02.004 [2024-12-16 19:31:46.126934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:02.004 [2024-12-16 19:31:46.126983] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.311 ms, result 0 00:30:02.004 true 00:30:02.005 19:31:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:30:02.005 19:31:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:30:02.005 19:31:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:02.005 19:31:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:30:02.005 19:31:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:30:02.005 19:31:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:30:02.265 [2024-12-16 19:31:46.526958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:02.265 [2024-12-16 19:31:46.526989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:30:02.265 [2024-12-16 19:31:46.526998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:30:02.265 [2024-12-16 19:31:46.527004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:02.265 [2024-12-16 19:31:46.527020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:02.265 [2024-12-16 19:31:46.527026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:30:02.265 [2024-12-16 19:31:46.527032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:30:02.265 [2024-12-16 19:31:46.527037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:02.265 [2024-12-16 19:31:46.527052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:02.265 [2024-12-16 19:31:46.527058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:30:02.265 [2024-12-16 19:31:46.527064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:30:02.265 [2024-12-16 19:31:46.527069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:30:02.265 [2024-12-16 19:31:46.527109] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.143 ms, result 0 00:30:02.265 true 00:30:02.265 19:31:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:02.524 { 00:30:02.524 "name": "ftl", 00:30:02.524 "properties": [ 00:30:02.524 { 00:30:02.524 "name": "superblock_version", 00:30:02.524 "value": 5, 00:30:02.524 "read-only": true 00:30:02.524 }, 00:30:02.524 { 00:30:02.524 "name": "base_device", 00:30:02.524 "bands": [ 00:30:02.524 { 00:30:02.524 "id": 0, 00:30:02.524 "state": "FREE", 00:30:02.524 "validity": 0.0 00:30:02.524 }, 00:30:02.524 { 00:30:02.524 "id": 1, 00:30:02.524 "state": "FREE", 00:30:02.524 "validity": 0.0 00:30:02.524 }, 00:30:02.524 { 00:30:02.524 "id": 2, 00:30:02.524 "state": "FREE", 00:30:02.524 "validity": 0.0 00:30:02.524 }, 00:30:02.524 { 00:30:02.524 "id": 3, 00:30:02.524 "state": "FREE", 00:30:02.524 "validity": 0.0 00:30:02.524 }, 00:30:02.524 { 00:30:02.524 "id": 4, 00:30:02.524 "state": "FREE", 00:30:02.524 "validity": 0.0 00:30:02.524 }, 00:30:02.524 { 00:30:02.524 "id": 5, 00:30:02.524 "state": "FREE", 00:30:02.524 "validity": 0.0 00:30:02.524 }, 00:30:02.524 { 00:30:02.524 "id": 6, 00:30:02.524 "state": "FREE", 00:30:02.524 "validity": 0.0 00:30:02.524 }, 00:30:02.524 { 00:30:02.524 "id": 7, 00:30:02.524 "state": "FREE", 00:30:02.524 "validity": 0.0 00:30:02.524 }, 00:30:02.524 { 00:30:02.524 "id": 8, 00:30:02.524 "state": "FREE", 00:30:02.524 "validity": 0.0 00:30:02.524 }, 00:30:02.524 { 00:30:02.524 "id": 9, 00:30:02.524 "state": "FREE", 00:30:02.524 "validity": 0.0 00:30:02.524 }, 00:30:02.524 { 00:30:02.524 "id": 10, 00:30:02.524 "state": "FREE", 00:30:02.524 "validity": 0.0 00:30:02.524 }, 00:30:02.524 { 00:30:02.524 "id": 11, 00:30:02.524 "state": "FREE", 00:30:02.524 "validity": 0.0 00:30:02.524 }, 00:30:02.524 { 00:30:02.524 "id": 12, 00:30:02.524 "state": "FREE", 00:30:02.524 "validity": 0.0 00:30:02.524 }, 00:30:02.524 { 00:30:02.524 "id": 13, 00:30:02.524 "state": "FREE", 00:30:02.524 "validity": 0.0 00:30:02.524 }, 00:30:02.524 { 00:30:02.524 "id": 14, 00:30:02.524 "state": "FREE", 00:30:02.524 "validity": 0.0 00:30:02.524 }, 00:30:02.524 { 00:30:02.524 "id": 15, 00:30:02.525 "state": "FREE", 00:30:02.525 "validity": 0.0 00:30:02.525 }, 00:30:02.525 { 00:30:02.525 "id": 16, 00:30:02.525 "state": "FREE", 00:30:02.525 "validity": 0.0 00:30:02.525 }, 00:30:02.525 { 00:30:02.525 "id": 17, 00:30:02.525 "state": "FREE", 00:30:02.525 "validity": 0.0 00:30:02.525 } 00:30:02.525 ], 00:30:02.525 "read-only": true 00:30:02.525 }, 00:30:02.525 { 00:30:02.525 "name": "cache_device", 00:30:02.525 "type": "bdev", 00:30:02.525 "chunks": [ 00:30:02.525 { 00:30:02.525 "id": 0, 00:30:02.525 "state": "INACTIVE", 00:30:02.525 "utilization": 0.0 00:30:02.525 }, 00:30:02.525 { 00:30:02.525 "id": 1, 00:30:02.525 "state": "CLOSED", 00:30:02.525 "utilization": 1.0 00:30:02.525 }, 00:30:02.525 { 00:30:02.525 "id": 2, 00:30:02.525 "state": "CLOSED", 00:30:02.525 "utilization": 1.0 00:30:02.525 }, 00:30:02.525 { 00:30:02.525 "id": 3, 00:30:02.525 "state": "OPEN", 00:30:02.525 "utilization": 0.001953125 00:30:02.525 }, 00:30:02.525 { 00:30:02.525 "id": 4, 00:30:02.525 "state": "OPEN", 00:30:02.525 "utilization": 0.0 00:30:02.525 } 00:30:02.525 ], 00:30:02.525 "read-only": true 00:30:02.525 }, 00:30:02.525 { 00:30:02.525 "name": "verbose_mode", 
00:30:02.525 "value": true, 00:30:02.525 "unit": "", 00:30:02.525 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:30:02.525 }, 00:30:02.525 { 00:30:02.525 "name": "prep_upgrade_on_shutdown", 00:30:02.525 "value": true, 00:30:02.525 "unit": "", 00:30:02.525 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:30:02.525 } 00:30:02.525 ] 00:30:02.525 } 00:30:02.525 19:31:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:30:02.525 19:31:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 84444 ]] 00:30:02.525 19:31:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 84444 00:30:02.525 19:31:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 84444 ']' 00:30:02.525 19:31:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 84444 00:30:02.525 19:31:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:30:02.525 19:31:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:02.525 19:31:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84444 00:30:02.525 killing process with pid 84444 00:30:02.525 19:31:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:02.525 19:31:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:02.525 19:31:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84444' 00:30:02.525 19:31:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 84444 00:30:02.525 19:31:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 84444 00:30:03.096 [2024-12-16 19:31:47.300005] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:30:03.096 [2024-12-16 19:31:47.310454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:03.096 [2024-12-16 19:31:47.310573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:30:03.096 [2024-12-16 19:31:47.310588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:30:03.096 [2024-12-16 19:31:47.310595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:03.096 [2024-12-16 19:31:47.310618] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:30:03.096 [2024-12-16 19:31:47.312740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:03.097 [2024-12-16 19:31:47.312763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:30:03.097 [2024-12-16 19:31:47.312772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.112 ms 00:30:03.097 [2024-12-16 19:31:47.312779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:11.236 [2024-12-16 19:31:54.889510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:11.236 [2024-12-16 19:31:54.889711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:30:11.236 [2024-12-16 19:31:54.889733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7576.684 ms 00:30:11.236 [2024-12-16 19:31:54.889740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:11.236 [2024-12-16 19:31:54.890710] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:30:11.236 [2024-12-16 19:31:54.890785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:30:11.236 [2024-12-16 19:31:54.890833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.955 ms 00:30:11.236 [2024-12-16 19:31:54.890851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:11.236 [2024-12-16 19:31:54.891718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:11.236 [2024-12-16 19:31:54.891787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:30:11.236 [2024-12-16 19:31:54.891833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.837 ms 00:30:11.236 [2024-12-16 19:31:54.891853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:11.236 [2024-12-16 19:31:54.899314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:11.236 [2024-12-16 19:31:54.899408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:30:11.236 [2024-12-16 19:31:54.899419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.425 ms 00:30:11.236 [2024-12-16 19:31:54.899426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:11.236 [2024-12-16 19:31:54.904219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:11.236 [2024-12-16 19:31:54.904315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:30:11.236 [2024-12-16 19:31:54.904327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.769 ms 00:30:11.236 [2024-12-16 19:31:54.904334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:11.236 [2024-12-16 19:31:54.904388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:11.236 [2024-12-16 19:31:54.904400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:30:11.236 [2024-12-16 19:31:54.904406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 00:30:11.236 [2024-12-16 19:31:54.904412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:11.236 [2024-12-16 19:31:54.911591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:11.236 [2024-12-16 19:31:54.911683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:30:11.236 [2024-12-16 19:31:54.911694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.167 ms 00:30:11.236 [2024-12-16 19:31:54.911700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:11.236 [2024-12-16 19:31:54.918711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:11.236 [2024-12-16 19:31:54.918801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:30:11.236 [2024-12-16 19:31:54.918812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.988 ms 00:30:11.236 [2024-12-16 19:31:54.918818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:11.236 [2024-12-16 19:31:54.926048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:11.236 [2024-12-16 19:31:54.926133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:30:11.236 [2024-12-16 19:31:54.926143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.207 ms 00:30:11.236 [2024-12-16 19:31:54.926149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:11.236 [2024-12-16 19:31:54.932936] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:11.236 [2024-12-16 19:31:54.933019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:30:11.237 [2024-12-16 19:31:54.933030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.728 ms 00:30:11.237 [2024-12-16 19:31:54.933035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:11.237 [2024-12-16 19:31:54.933056] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:30:11.237 [2024-12-16 19:31:54.933074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:30:11.237 [2024-12-16 19:31:54.933082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:30:11.237 [2024-12-16 19:31:54.933088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:30:11.237 [2024-12-16 19:31:54.933094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:11.237 [2024-12-16 19:31:54.933100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:11.237 [2024-12-16 19:31:54.933105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:11.237 [2024-12-16 19:31:54.933111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:11.237 [2024-12-16 19:31:54.933117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:11.237 [2024-12-16 19:31:54.933122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:11.237 [2024-12-16 19:31:54.933128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:11.237 [2024-12-16 19:31:54.933134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:11.237 [2024-12-16 19:31:54.933140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:11.237 [2024-12-16 19:31:54.933145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:11.237 [2024-12-16 19:31:54.933151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:11.237 [2024-12-16 19:31:54.933156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:11.237 [2024-12-16 19:31:54.933162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:11.237 [2024-12-16 19:31:54.933167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:11.237 [2024-12-16 19:31:54.933191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:11.237 [2024-12-16 19:31:54.933199] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:30:11.237 [2024-12-16 19:31:54.933205] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 91ef306d-1de1-4576-9514-e2069d79fd9f 00:30:11.237 [2024-12-16 19:31:54.933211] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:30:11.237 [2024-12-16 19:31:54.933217] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl] total writes: 786752 00:30:11.237 [2024-12-16 19:31:54.933222] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:30:11.237 [2024-12-16 19:31:54.933228] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:30:11.237 [2024-12-16 19:31:54.933236] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:30:11.237 [2024-12-16 19:31:54.933242] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:30:11.237 [2024-12-16 19:31:54.933249] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:30:11.237 [2024-12-16 19:31:54.933254] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:30:11.237 [2024-12-16 19:31:54.933259] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:30:11.237 [2024-12-16 19:31:54.933264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:11.237 [2024-12-16 19:31:54.933271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:30:11.237 [2024-12-16 19:31:54.933277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.208 ms 00:30:11.237 [2024-12-16 19:31:54.933283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:11.237 [2024-12-16 19:31:54.943016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:11.237 [2024-12-16 19:31:54.943040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:30:11.237 [2024-12-16 19:31:54.943052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.720 ms 00:30:11.237 [2024-12-16 19:31:54.943058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:11.237 [2024-12-16 19:31:54.943343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:11.237 [2024-12-16 19:31:54.943356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:30:11.237 [2024-12-16 19:31:54.943363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.270 ms 00:30:11.237 [2024-12-16 19:31:54.943369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:11.237 [2024-12-16 19:31:54.976479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:11.237 [2024-12-16 19:31:54.976509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:30:11.237 [2024-12-16 19:31:54.976517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:11.237 [2024-12-16 19:31:54.976524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:11.237 [2024-12-16 19:31:54.976547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:11.237 [2024-12-16 19:31:54.976553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:30:11.237 [2024-12-16 19:31:54.976560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:11.237 [2024-12-16 19:31:54.976565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:11.237 [2024-12-16 19:31:54.976613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:11.237 [2024-12-16 19:31:54.976621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:30:11.237 [2024-12-16 19:31:54.976630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:11.237 [2024-12-16 19:31:54.976636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:11.237 [2024-12-16 19:31:54.976648] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:11.237 [2024-12-16 19:31:54.976654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:30:11.237 [2024-12-16 19:31:54.976660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:11.237 [2024-12-16 19:31:54.976666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:11.237 [2024-12-16 19:31:55.036905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:11.237 [2024-12-16 19:31:55.036937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:30:11.237 [2024-12-16 19:31:55.036949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:11.237 [2024-12-16 19:31:55.036955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:11.237 [2024-12-16 19:31:55.086475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:11.237 [2024-12-16 19:31:55.086508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:30:11.237 [2024-12-16 19:31:55.086518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:11.237 [2024-12-16 19:31:55.086525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:11.237 [2024-12-16 19:31:55.086602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:11.237 [2024-12-16 19:31:55.086610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:30:11.237 [2024-12-16 19:31:55.086617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:11.237 [2024-12-16 19:31:55.086623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:11.237 [2024-12-16 19:31:55.086657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:11.237 [2024-12-16 19:31:55.086665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:30:11.237 [2024-12-16 19:31:55.086671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:11.237 [2024-12-16 19:31:55.086677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:11.237 [2024-12-16 19:31:55.086744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:11.237 [2024-12-16 19:31:55.086752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:30:11.237 [2024-12-16 19:31:55.086758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:11.237 [2024-12-16 19:31:55.086764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:11.237 [2024-12-16 19:31:55.086789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:11.237 [2024-12-16 19:31:55.086796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:30:11.237 [2024-12-16 19:31:55.086802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:11.237 [2024-12-16 19:31:55.086808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:11.237 [2024-12-16 19:31:55.086835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:11.237 [2024-12-16 19:31:55.086842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:30:11.237 [2024-12-16 19:31:55.086848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:11.237 [2024-12-16 19:31:55.086854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:11.237 
[2024-12-16 19:31:55.086889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:11.237 [2024-12-16 19:31:55.086896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:30:11.237 [2024-12-16 19:31:55.086902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:11.237 [2024-12-16 19:31:55.086908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:11.237 [2024-12-16 19:31:55.087000] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 7776.500 ms, result 0 00:30:14.541 19:31:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:30:14.541 19:31:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:30:14.541 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:14.541 19:31:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:30:14.541 19:31:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:30:14.541 19:31:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:30:14.541 19:31:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=84987 00:30:14.541 19:31:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:30:14.541 19:31:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 84987 00:30:14.541 19:31:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 84987 ']' 00:30:14.541 19:31:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:14.541 19:31:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:14.541 19:31:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:14.541 19:31:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:14.541 19:31:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:14.541 19:31:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:14.541 [2024-12-16 19:31:58.867009] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
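With the 'FTL shutdown' management process finished (result 0, ~7.8 s), tcp_target_setup relaunches a target from the config saved earlier, and the load path below finds the prep-upgrade state and walks the layout upgrade. The relaunch reduces to roughly this (path shortened to repo-relative; waitforlisten is the common.sh helper seen in the trace):

    # Bring the target back on core 0 from the JSON saved before shutdown;
    # tgt.json recreates the base/cache bdevs and the ftl bdev on top of them.
    build/bin/spdk_tgt --cpumask='[0]' --config=test/ftl/config/tgt.json &
    spdk_tgt_pid=$!
    waitforlisten "$spdk_tgt_pid" /var/tmp/spdk.sock
    # The transient "unable to find bdev with name: cachen1" notices below
    # come from the FTL open path polling for the cache bdev while the
    # saved config is still being applied.
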
00:30:14.541 [2024-12-16 19:31:58.867328] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84987 ] 00:30:14.801 [2024-12-16 19:31:59.024579] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:14.801 [2024-12-16 19:31:59.105277] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:30:15.373 [2024-12-16 19:31:59.678615] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:30:15.373 [2024-12-16 19:31:59.678829] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:30:15.635 [2024-12-16 19:31:59.821428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:15.635 [2024-12-16 19:31:59.821548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:30:15.635 [2024-12-16 19:31:59.821599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:30:15.635 [2024-12-16 19:31:59.821618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:15.635 [2024-12-16 19:31:59.821677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:15.635 [2024-12-16 19:31:59.821696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:30:15.635 [2024-12-16 19:31:59.821712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms 00:30:15.635 [2024-12-16 19:31:59.821726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:15.635 [2024-12-16 19:31:59.821757] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:30:15.635 [2024-12-16 19:31:59.822296] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:30:15.635 [2024-12-16 19:31:59.822382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:15.635 [2024-12-16 19:31:59.822424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:30:15.635 [2024-12-16 19:31:59.822442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.632 ms 00:30:15.635 [2024-12-16 19:31:59.822457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:15.635 [2024-12-16 19:31:59.823493] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:30:15.635 [2024-12-16 19:31:59.833188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:15.635 [2024-12-16 19:31:59.833288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:30:15.635 [2024-12-16 19:31:59.833341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.697 ms 00:30:15.635 [2024-12-16 19:31:59.833358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:15.635 [2024-12-16 19:31:59.833407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:15.635 [2024-12-16 19:31:59.833456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:30:15.635 [2024-12-16 19:31:59.833475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:30:15.635 [2024-12-16 19:31:59.833490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:15.635 [2024-12-16 19:31:59.837897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:15.635 [2024-12-16 
19:31:59.837988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:30:15.635 [2024-12-16 19:31:59.838032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.323 ms 00:30:15.635 [2024-12-16 19:31:59.838050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:15.635 [2024-12-16 19:31:59.838103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:15.635 [2024-12-16 19:31:59.838153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:30:15.635 [2024-12-16 19:31:59.838181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 00:30:15.635 [2024-12-16 19:31:59.838199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:15.635 [2024-12-16 19:31:59.838266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:15.635 [2024-12-16 19:31:59.838290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:30:15.635 [2024-12-16 19:31:59.838336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:30:15.635 [2024-12-16 19:31:59.838353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:15.635 [2024-12-16 19:31:59.838381] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:30:15.635 [2024-12-16 19:31:59.841090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:15.635 [2024-12-16 19:31:59.841185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:30:15.635 [2024-12-16 19:31:59.841229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.713 ms 00:30:15.635 [2024-12-16 19:31:59.841249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:15.635 [2024-12-16 19:31:59.841284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:15.635 [2024-12-16 19:31:59.841329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:30:15.635 [2024-12-16 19:31:59.841347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:30:15.635 [2024-12-16 19:31:59.841361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:15.635 [2024-12-16 19:31:59.841408] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:30:15.635 [2024-12-16 19:31:59.841436] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:30:15.635 [2024-12-16 19:31:59.841481] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:30:15.635 [2024-12-16 19:31:59.841579] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:30:15.635 [2024-12-16 19:31:59.841676] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:30:15.635 [2024-12-16 19:31:59.841729] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:30:15.635 [2024-12-16 19:31:59.841755] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:30:15.635 [2024-12-16 19:31:59.841797] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:30:15.635 [2024-12-16 19:31:59.841821] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:30:15.635 [2024-12-16 19:31:59.841901] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:30:15.635 [2024-12-16 19:31:59.841908] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:30:15.635 [2024-12-16 19:31:59.841914] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:30:15.635 [2024-12-16 19:31:59.841919] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:30:15.635 [2024-12-16 19:31:59.841926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:15.635 [2024-12-16 19:31:59.841932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:30:15.635 [2024-12-16 19:31:59.841938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.520 ms 00:30:15.635 [2024-12-16 19:31:59.841943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:15.635 [2024-12-16 19:31:59.842013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:15.635 [2024-12-16 19:31:59.842019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:30:15.635 [2024-12-16 19:31:59.842029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.053 ms 00:30:15.635 [2024-12-16 19:31:59.842034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:15.635 [2024-12-16 19:31:59.842108] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:30:15.635 [2024-12-16 19:31:59.842116] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:30:15.635 [2024-12-16 19:31:59.842122] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:30:15.635 [2024-12-16 19:31:59.842128] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:15.635 [2024-12-16 19:31:59.842133] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:30:15.635 [2024-12-16 19:31:59.842139] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:30:15.636 [2024-12-16 19:31:59.842144] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:30:15.636 [2024-12-16 19:31:59.842149] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:30:15.636 [2024-12-16 19:31:59.842154] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:30:15.636 [2024-12-16 19:31:59.842159] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:15.636 [2024-12-16 19:31:59.842164] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:30:15.636 [2024-12-16 19:31:59.842169] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:30:15.636 [2024-12-16 19:31:59.842189] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:15.636 [2024-12-16 19:31:59.842195] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:30:15.636 [2024-12-16 19:31:59.842200] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:30:15.636 [2024-12-16 19:31:59.842205] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:15.636 [2024-12-16 19:31:59.842210] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:30:15.636 [2024-12-16 19:31:59.842215] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:30:15.636 [2024-12-16 19:31:59.842220] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:15.636 [2024-12-16 19:31:59.842225] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:30:15.636 [2024-12-16 19:31:59.842230] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:30:15.636 [2024-12-16 19:31:59.842235] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:15.636 [2024-12-16 19:31:59.842240] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:30:15.636 [2024-12-16 19:31:59.842251] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:30:15.636 [2024-12-16 19:31:59.842256] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:15.636 [2024-12-16 19:31:59.842261] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:30:15.636 [2024-12-16 19:31:59.842266] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:30:15.636 [2024-12-16 19:31:59.842271] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:15.636 [2024-12-16 19:31:59.842276] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:30:15.636 [2024-12-16 19:31:59.842281] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:30:15.636 [2024-12-16 19:31:59.842286] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:15.636 [2024-12-16 19:31:59.842291] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:30:15.636 [2024-12-16 19:31:59.842296] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:30:15.636 [2024-12-16 19:31:59.842301] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:15.636 [2024-12-16 19:31:59.842306] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:30:15.636 [2024-12-16 19:31:59.842311] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:30:15.636 [2024-12-16 19:31:59.842316] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:15.636 [2024-12-16 19:31:59.842322] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:30:15.636 [2024-12-16 19:31:59.842327] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:30:15.636 [2024-12-16 19:31:59.842332] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:15.636 [2024-12-16 19:31:59.842337] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:30:15.636 [2024-12-16 19:31:59.842342] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:30:15.636 [2024-12-16 19:31:59.842347] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:15.636 [2024-12-16 19:31:59.842352] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:30:15.636 [2024-12-16 19:31:59.842358] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:30:15.636 [2024-12-16 19:31:59.842364] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:30:15.636 [2024-12-16 19:31:59.842370] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:15.636 [2024-12-16 19:31:59.842377] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:30:15.636 [2024-12-16 19:31:59.842382] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:30:15.636 [2024-12-16 19:31:59.842387] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:30:15.636 [2024-12-16 19:31:59.842392] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:30:15.636 [2024-12-16 19:31:59.842397] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:30:15.636 [2024-12-16 19:31:59.842402] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:30:15.636 [2024-12-16 19:31:59.842408] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:30:15.636 [2024-12-16 19:31:59.842415] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:15.636 [2024-12-16 19:31:59.842422] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:30:15.636 [2024-12-16 19:31:59.842427] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:30:15.636 [2024-12-16 19:31:59.842432] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:30:15.636 [2024-12-16 19:31:59.842438] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:30:15.636 [2024-12-16 19:31:59.842443] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:30:15.636 [2024-12-16 19:31:59.842449] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:30:15.636 [2024-12-16 19:31:59.842454] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:30:15.636 [2024-12-16 19:31:59.842459] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:30:15.636 [2024-12-16 19:31:59.842465] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:30:15.636 [2024-12-16 19:31:59.842470] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:30:15.636 [2024-12-16 19:31:59.842475] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:30:15.636 [2024-12-16 19:31:59.842480] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:30:15.636 [2024-12-16 19:31:59.842486] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:30:15.636 [2024-12-16 19:31:59.842491] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:30:15.636 [2024-12-16 19:31:59.842496] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:30:15.636 [2024-12-16 19:31:59.842502] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:15.636 [2024-12-16 19:31:59.842508] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:30:15.636 [2024-12-16 19:31:59.842514] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:30:15.636 [2024-12-16 19:31:59.842519] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:30:15.636 [2024-12-16 19:31:59.842525] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:30:15.636 [2024-12-16 19:31:59.842530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:15.636 [2024-12-16 19:31:59.842536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:30:15.636 [2024-12-16 19:31:59.842541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.475 ms 00:30:15.636 [2024-12-16 19:31:59.842546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:15.636 [2024-12-16 19:31:59.842595] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:30:15.636 [2024-12-16 19:31:59.842603] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:30:19.892 [2024-12-16 19:32:03.428907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:19.892 [2024-12-16 19:32:03.429243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:30:19.892 [2024-12-16 19:32:03.429274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3586.298 ms 00:30:19.892 [2024-12-16 19:32:03.429285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:19.892 [2024-12-16 19:32:03.460750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:19.892 [2024-12-16 19:32:03.460966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:30:19.892 [2024-12-16 19:32:03.460988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 31.184 ms 00:30:19.892 [2024-12-16 19:32:03.460998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:19.892 [2024-12-16 19:32:03.461156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:19.892 [2024-12-16 19:32:03.461198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:30:19.892 [2024-12-16 19:32:03.461210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:30:19.892 [2024-12-16 19:32:03.461219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:19.892 [2024-12-16 19:32:03.496561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:19.892 [2024-12-16 19:32:03.496610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:30:19.892 [2024-12-16 19:32:03.496623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 35.294 ms 00:30:19.892 [2024-12-16 19:32:03.496635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:19.892 [2024-12-16 19:32:03.496679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:19.892 [2024-12-16 19:32:03.496689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:30:19.892 [2024-12-16 19:32:03.496698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:30:19.892 [2024-12-16 19:32:03.496707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:19.892 [2024-12-16 19:32:03.497323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:19.892 [2024-12-16 19:32:03.497347] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:30:19.892 [2024-12-16 19:32:03.497358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.561 ms 00:30:19.892 [2024-12-16 19:32:03.497367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:19.892 [2024-12-16 19:32:03.497419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:19.892 [2024-12-16 19:32:03.497428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:30:19.892 [2024-12-16 19:32:03.497437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:30:19.892 [2024-12-16 19:32:03.497445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:19.892 [2024-12-16 19:32:03.514882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:19.892 [2024-12-16 19:32:03.515077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:30:19.892 [2024-12-16 19:32:03.515098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.411 ms 00:30:19.892 [2024-12-16 19:32:03.515107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:19.892 [2024-12-16 19:32:03.539932] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:30:19.892 [2024-12-16 19:32:03.539995] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:30:19.892 [2024-12-16 19:32:03.540013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:19.892 [2024-12-16 19:32:03.540022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:30:19.892 [2024-12-16 19:32:03.540033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.751 ms 00:30:19.892 [2024-12-16 19:32:03.540041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:19.892 [2024-12-16 19:32:03.554819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:19.893 [2024-12-16 19:32:03.554875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:30:19.893 [2024-12-16 19:32:03.554887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.719 ms 00:30:19.893 [2024-12-16 19:32:03.554896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:19.893 [2024-12-16 19:32:03.567454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:19.893 [2024-12-16 19:32:03.567504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:30:19.893 [2024-12-16 19:32:03.567517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.499 ms 00:30:19.893 [2024-12-16 19:32:03.567524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:19.893 [2024-12-16 19:32:03.580214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:19.893 [2024-12-16 19:32:03.580261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:30:19.893 [2024-12-16 19:32:03.580272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.639 ms 00:30:19.893 [2024-12-16 19:32:03.580279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:19.893 [2024-12-16 19:32:03.580936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:19.893 [2024-12-16 19:32:03.580963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:30:19.893 [2024-12-16 
19:32:03.580974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.538 ms 00:30:19.893 [2024-12-16 19:32:03.580981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:19.893 [2024-12-16 19:32:03.647054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:19.893 [2024-12-16 19:32:03.647371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:30:19.893 [2024-12-16 19:32:03.647397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 66.050 ms 00:30:19.893 [2024-12-16 19:32:03.647407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:19.893 [2024-12-16 19:32:03.658646] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:30:19.893 [2024-12-16 19:32:03.659847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:19.893 [2024-12-16 19:32:03.659892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:30:19.893 [2024-12-16 19:32:03.659905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.384 ms 00:30:19.893 [2024-12-16 19:32:03.659912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:19.893 [2024-12-16 19:32:03.660026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:19.893 [2024-12-16 19:32:03.660042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:30:19.893 [2024-12-16 19:32:03.660052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:30:19.893 [2024-12-16 19:32:03.660060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:19.893 [2024-12-16 19:32:03.660123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:19.893 [2024-12-16 19:32:03.660134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:30:19.893 [2024-12-16 19:32:03.660143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:30:19.893 [2024-12-16 19:32:03.660152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:19.893 [2024-12-16 19:32:03.660207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:19.893 [2024-12-16 19:32:03.660217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:30:19.893 [2024-12-16 19:32:03.660229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.037 ms 00:30:19.893 [2024-12-16 19:32:03.660238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:19.893 [2024-12-16 19:32:03.660277] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:30:19.893 [2024-12-16 19:32:03.660289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:19.893 [2024-12-16 19:32:03.660298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:30:19.893 [2024-12-16 19:32:03.660307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:30:19.893 [2024-12-16 19:32:03.660315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:19.893 [2024-12-16 19:32:03.685706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:19.893 [2024-12-16 19:32:03.685899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:30:19.893 [2024-12-16 19:32:03.685920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.365 ms 00:30:19.893 [2024-12-16 19:32:03.685929] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:19.893 [2024-12-16 19:32:03.686009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:19.893 [2024-12-16 19:32:03.686020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:30:19.893 [2024-12-16 19:32:03.686029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.038 ms 00:30:19.893 [2024-12-16 19:32:03.686037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:19.893 [2024-12-16 19:32:03.687492] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3865.522 ms, result 0 00:30:19.893 [2024-12-16 19:32:03.702289] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:19.893 [2024-12-16 19:32:03.718287] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:30:19.893 [2024-12-16 19:32:03.726463] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:30:19.893 19:32:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:19.893 19:32:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:30:19.893 19:32:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:30:19.893 19:32:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:30:19.893 19:32:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:30:19.893 [2024-12-16 19:32:03.966498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:19.893 [2024-12-16 19:32:03.966583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:30:19.893 [2024-12-16 19:32:03.966604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:30:19.893 [2024-12-16 19:32:03.966612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:19.893 [2024-12-16 19:32:03.966639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:19.893 [2024-12-16 19:32:03.966648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:30:19.893 [2024-12-16 19:32:03.966658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:30:19.893 [2024-12-16 19:32:03.966665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:19.893 [2024-12-16 19:32:03.966686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:19.893 [2024-12-16 19:32:03.966695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:30:19.893 [2024-12-16 19:32:03.966705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:30:19.893 [2024-12-16 19:32:03.966713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:19.893 [2024-12-16 19:32:03.966794] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.272 ms, result 0 00:30:19.893 true 00:30:19.893 19:32:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:19.893 { 00:30:19.893 "name": "ftl", 00:30:19.893 "properties": [ 00:30:19.893 { 00:30:19.893 "name": "superblock_version", 00:30:19.893 "value": 5, 00:30:19.893 "read-only": true 00:30:19.893 }, 
00:30:19.893 { 00:30:19.893 "name": "base_device", 00:30:19.893 "bands": [ 00:30:19.893 { 00:30:19.893 "id": 0, 00:30:19.893 "state": "CLOSED", 00:30:19.893 "validity": 1.0 00:30:19.893 }, 00:30:19.893 { 00:30:19.893 "id": 1, 00:30:19.893 "state": "CLOSED", 00:30:19.893 "validity": 1.0 00:30:19.893 }, 00:30:19.893 { 00:30:19.893 "id": 2, 00:30:19.893 "state": "CLOSED", 00:30:19.893 "validity": 0.007843137254901933 00:30:19.893 }, 00:30:19.893 { 00:30:19.893 "id": 3, 00:30:19.893 "state": "FREE", 00:30:19.893 "validity": 0.0 00:30:19.893 }, 00:30:19.893 { 00:30:19.893 "id": 4, 00:30:19.893 "state": "FREE", 00:30:19.893 "validity": 0.0 00:30:19.893 }, 00:30:19.893 { 00:30:19.893 "id": 5, 00:30:19.893 "state": "FREE", 00:30:19.893 "validity": 0.0 00:30:19.893 }, 00:30:19.893 { 00:30:19.893 "id": 6, 00:30:19.893 "state": "FREE", 00:30:19.893 "validity": 0.0 00:30:19.893 }, 00:30:19.893 { 00:30:19.893 "id": 7, 00:30:19.893 "state": "FREE", 00:30:19.893 "validity": 0.0 00:30:19.893 }, 00:30:19.893 { 00:30:19.893 "id": 8, 00:30:19.893 "state": "FREE", 00:30:19.893 "validity": 0.0 00:30:19.893 }, 00:30:19.893 { 00:30:19.893 "id": 9, 00:30:19.893 "state": "FREE", 00:30:19.893 "validity": 0.0 00:30:19.893 }, 00:30:19.893 { 00:30:19.893 "id": 10, 00:30:19.893 "state": "FREE", 00:30:19.893 "validity": 0.0 00:30:19.893 }, 00:30:19.893 { 00:30:19.893 "id": 11, 00:30:19.893 "state": "FREE", 00:30:19.893 "validity": 0.0 00:30:19.893 }, 00:30:19.893 { 00:30:19.893 "id": 12, 00:30:19.893 "state": "FREE", 00:30:19.893 "validity": 0.0 00:30:19.893 }, 00:30:19.893 { 00:30:19.893 "id": 13, 00:30:19.893 "state": "FREE", 00:30:19.893 "validity": 0.0 00:30:19.893 }, 00:30:19.893 { 00:30:19.893 "id": 14, 00:30:19.893 "state": "FREE", 00:30:19.893 "validity": 0.0 00:30:19.893 }, 00:30:19.893 { 00:30:19.893 "id": 15, 00:30:19.893 "state": "FREE", 00:30:19.893 "validity": 0.0 00:30:19.893 }, 00:30:19.893 { 00:30:19.893 "id": 16, 00:30:19.893 "state": "FREE", 00:30:19.893 "validity": 0.0 00:30:19.893 }, 00:30:19.893 { 00:30:19.893 "id": 17, 00:30:19.893 "state": "FREE", 00:30:19.893 "validity": 0.0 00:30:19.893 } 00:30:19.893 ], 00:30:19.893 "read-only": true 00:30:19.893 }, 00:30:19.894 { 00:30:19.894 "name": "cache_device", 00:30:19.894 "type": "bdev", 00:30:19.894 "chunks": [ 00:30:19.894 { 00:30:19.894 "id": 0, 00:30:19.894 "state": "INACTIVE", 00:30:19.894 "utilization": 0.0 00:30:19.894 }, 00:30:19.894 { 00:30:19.894 "id": 1, 00:30:19.894 "state": "OPEN", 00:30:19.894 "utilization": 0.0 00:30:19.894 }, 00:30:19.894 { 00:30:19.894 "id": 2, 00:30:19.894 "state": "OPEN", 00:30:19.894 "utilization": 0.0 00:30:19.894 }, 00:30:19.894 { 00:30:19.894 "id": 3, 00:30:19.894 "state": "FREE", 00:30:19.894 "utilization": 0.0 00:30:19.894 }, 00:30:19.894 { 00:30:19.894 "id": 4, 00:30:19.894 "state": "FREE", 00:30:19.894 "utilization": 0.0 00:30:19.894 } 00:30:19.894 ], 00:30:19.894 "read-only": true 00:30:19.894 }, 00:30:19.894 { 00:30:19.894 "name": "verbose_mode", 00:30:19.894 "value": true, 00:30:19.894 "unit": "", 00:30:19.894 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:30:19.894 }, 00:30:19.894 { 00:30:19.894 "name": "prep_upgrade_on_shutdown", 00:30:19.894 "value": false, 00:30:19.894 "unit": "", 00:30:19.894 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:30:19.894 } 00:30:19.894 ] 00:30:19.894 } 00:30:19.894 19:32:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:30:19.894 19:32:04 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:19.894 19:32:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:30:20.204 19:32:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:30:20.205 19:32:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:30:20.205 19:32:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:30:20.205 19:32:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:30:20.205 19:32:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:20.466 Validate MD5 checksum, iteration 1 00:30:20.466 19:32:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:30:20.466 19:32:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:30:20.466 19:32:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:30:20.466 19:32:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:30:20.466 19:32:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:30:20.466 19:32:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:30:20.466 19:32:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:30:20.466 19:32:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:30:20.466 19:32:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:20.466 19:32:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:20.466 19:32:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:20.466 19:32:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:20.466 19:32:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:30:20.466 [2024-12-16 19:32:04.672239] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
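Before kicking off the spdk_dd read above, the test gates on two jq assertions over the bdev_ftl_get_properties dump: zero cache chunks with non-zero utilization, and zero bands counted as OPENED. A minimal standalone sketch of that gate, assuming a live target with an FTL bdev named ftl and rpc.py at the path used in this run; both jq filters are copied verbatim from the trace (note the second one selects a property named "bands", while the dump above exposes the bands under "base_device", so it matches nothing and yields 0 either way):

  props=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl)
  # Cache chunks still holding data -- expected 0 once writeback is done.
  used=$(jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' <<< "$props")
  # Bands reported as OPENED -- expected 0; filter mirrors the script as-is.
  opened=$(jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' <<< "$props")
  (( used == 0 && opened == 0 )) || { echo "FTL bdev not quiesced" >&2; exit 1; }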
00:30:20.466 [2024-12-16 19:32:04.672491] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85067 ] 00:30:20.727 [2024-12-16 19:32:04.832787] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:20.727 [2024-12-16 19:32:04.925342] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:30:22.111  [2024-12-16T19:32:07.406Z] Copying: 552/1024 [MB] (552 MBps) [2024-12-16T19:32:08.786Z] Copying: 1024/1024 [MB] (average 523 MBps) 00:30:24.432 00:30:24.432 19:32:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:30:24.432 19:32:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:26.332 19:32:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:30:26.332 19:32:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=09f670e9dcec502ef11c64ec716fae23 00:30:26.332 19:32:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 09f670e9dcec502ef11c64ec716fae23 != \0\9\f\6\7\0\e\9\d\c\e\c\5\0\2\e\f\1\1\c\6\4\e\c\7\1\6\f\a\e\2\3 ]] 00:30:26.332 19:32:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:30:26.332 19:32:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:30:26.332 19:32:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:30:26.332 Validate MD5 checksum, iteration 2 00:30:26.332 19:32:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:30:26.332 19:32:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:26.332 19:32:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:26.332 19:32:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:26.332 19:32:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:26.332 19:32:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:30:26.591 [2024-12-16 19:32:10.692355] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
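Each validation pass above reads 1024 blocks of 1 MiB (one 1 GiB window) from ftln1 at the current --skip offset into test/ftl/file, then checks the window's digest. A hedged sketch of the per-iteration check, where expected is a hypothetical stand-in for the digest recorded when the window was written (the value shown is iteration 1's from this run):

  file=/home/vagrant/spdk_repo/spdk/test/ftl/file    # output of the tcp_dd read
  expected=09f670e9dcec502ef11c64ec716fae23          # hypothetical: digest captured at write time
  sum=$(md5sum "$file" | cut -f1 -d' ')              # md5sum prints "<digest>  <path>"
  [[ $sum == "$expected" ]] || { echo "MD5 mismatch at skip=$skip" >&2; exit 1; }
  skip=$((skip + 1024))                              # advance to the next 1 GiB window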
00:30:26.591 [2024-12-16 19:32:10.692601] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85134 ] 00:30:26.591 [2024-12-16 19:32:10.852625] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:26.849 [2024-12-16 19:32:10.946226] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:30:28.232  [2024-12-16T19:32:13.529Z] Copying: 510/1024 [MB] (510 MBps) [2024-12-16T19:32:15.437Z] Copying: 1024/1024 [MB] (average 555 MBps) 00:30:31.083 00:30:31.083 19:32:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:30:31.083 19:32:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:32.984 19:32:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:30:32.984 19:32:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=1fe019f58bd9e6725aceffa6eea41770 00:30:32.984 19:32:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 1fe019f58bd9e6725aceffa6eea41770 != \1\f\e\0\1\9\f\5\8\b\d\9\e\6\7\2\5\a\c\e\f\f\a\6\e\e\a\4\1\7\7\0 ]] 00:30:32.984 19:32:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:30:32.984 19:32:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:30:32.984 19:32:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:30:32.984 19:32:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 84987 ]] 00:30:32.984 19:32:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 84987 00:30:32.984 19:32:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:30:32.984 19:32:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:30:32.984 19:32:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:30:32.984 19:32:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:30:32.984 19:32:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:30:32.984 19:32:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=85201 00:30:32.984 19:32:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:32.984 19:32:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:30:32.984 19:32:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 85201 00:30:32.984 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:32.984 19:32:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 85201 ']' 00:30:32.984 19:32:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:32.984 19:32:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:32.984 19:32:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
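The stretch above is the dirty-shutdown step: the running target (pid 84987) is killed with SIGKILL so FTL never runs its clean shutdown, then a fresh spdk_tgt (pid 85201) is started from the saved tgt.json, which forces the recovery path traced in the records that follow. A minimal sketch of that sequence, assuming the repo layout from this run and the waitforlisten helper sourced from test/common/autotest_common.sh:

  kill -9 "$spdk_tgt_pid"                    # simulate a crash; leaves the FTL instance dirty
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' \
      --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json &
  spdk_tgt_pid=$!
  waitforlisten "$spdk_tgt_pid"              # poll /var/tmp/spdk.sock until the RPC server answers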
00:30:32.984 19:32:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:32.984 19:32:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:32.984 [2024-12-16 19:32:16.916081] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:30:32.984 [2024-12-16 19:32:16.916329] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85201 ] 00:30:32.984 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: 84987 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:30:32.984 [2024-12-16 19:32:17.065522] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:32.984 [2024-12-16 19:32:17.159119] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:30:33.550 [2024-12-16 19:32:17.790510] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:30:33.550 [2024-12-16 19:32:17.790579] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:30:33.810 [2024-12-16 19:32:17.939202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:33.810 [2024-12-16 19:32:17.939236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:30:33.810 [2024-12-16 19:32:17.939248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:30:33.810 [2024-12-16 19:32:17.939255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:33.810 [2024-12-16 19:32:17.939302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:33.810 [2024-12-16 19:32:17.939311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:30:33.810 [2024-12-16 19:32:17.939317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.032 ms 00:30:33.810 [2024-12-16 19:32:17.939323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:33.810 [2024-12-16 19:32:17.939342] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:30:33.810 [2024-12-16 19:32:17.939869] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:30:33.810 [2024-12-16 19:32:17.939882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:33.810 [2024-12-16 19:32:17.939889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:30:33.810 [2024-12-16 19:32:17.939896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.547 ms 00:30:33.810 [2024-12-16 19:32:17.939902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:33.810 [2024-12-16 19:32:17.940118] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:30:33.810 [2024-12-16 19:32:17.953741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:33.810 [2024-12-16 19:32:17.953772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:30:33.810 [2024-12-16 19:32:17.953782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.623 ms 00:30:33.810 [2024-12-16 19:32:17.953789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:33.810 [2024-12-16 19:32:17.961027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] 
Action 00:30:33.810 [2024-12-16 19:32:17.961130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:30:33.810 [2024-12-16 19:32:17.961191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.033 ms 00:30:33.810 [2024-12-16 19:32:17.961210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:33.810 [2024-12-16 19:32:17.961480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:33.810 [2024-12-16 19:32:17.961504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:30:33.810 [2024-12-16 19:32:17.961520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.195 ms 00:30:33.810 [2024-12-16 19:32:17.961535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:33.810 [2024-12-16 19:32:17.961587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:33.810 [2024-12-16 19:32:17.961607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:30:33.810 [2024-12-16 19:32:17.961623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.028 ms 00:30:33.810 [2024-12-16 19:32:17.962045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:33.810 [2024-12-16 19:32:17.962107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:33.810 [2024-12-16 19:32:17.962129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:30:33.810 [2024-12-16 19:32:17.962146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:30:33.810 [2024-12-16 19:32:17.962161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:33.810 [2024-12-16 19:32:17.962203] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:30:33.810 [2024-12-16 19:32:17.964554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:33.810 [2024-12-16 19:32:17.964653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:30:33.810 [2024-12-16 19:32:17.964702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.356 ms 00:30:33.810 [2024-12-16 19:32:17.964720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:33.810 [2024-12-16 19:32:17.964756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:33.810 [2024-12-16 19:32:17.964772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:30:33.810 [2024-12-16 19:32:17.964787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:30:33.810 [2024-12-16 19:32:17.964803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:33.810 [2024-12-16 19:32:17.964829] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:30:33.810 [2024-12-16 19:32:17.964857] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:30:33.810 [2024-12-16 19:32:17.964975] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:30:33.810 [2024-12-16 19:32:17.965012] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:30:33.810 [2024-12-16 19:32:17.965153] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:30:33.811 [2024-12-16 19:32:17.965199] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:30:33.811 [2024-12-16 19:32:17.965265] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:30:33.811 [2024-12-16 19:32:17.965294] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:30:33.811 [2024-12-16 19:32:17.965352] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:30:33.811 [2024-12-16 19:32:17.965378] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:30:33.811 [2024-12-16 19:32:17.965393] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:30:33.811 [2024-12-16 19:32:17.965408] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:30:33.811 [2024-12-16 19:32:17.965424] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:30:33.811 [2024-12-16 19:32:17.965468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:33.811 [2024-12-16 19:32:17.965489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:30:33.811 [2024-12-16 19:32:17.965505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.640 ms 00:30:33.811 [2024-12-16 19:32:17.965520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:33.811 [2024-12-16 19:32:17.965598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:33.811 [2024-12-16 19:32:17.965615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:30:33.811 [2024-12-16 19:32:17.965631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.053 ms 00:30:33.811 [2024-12-16 19:32:17.965761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:33.811 [2024-12-16 19:32:17.965863] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:30:33.811 [2024-12-16 19:32:17.965884] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:30:33.811 [2024-12-16 19:32:17.965903] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:30:33.811 [2024-12-16 19:32:17.965967] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:33.811 [2024-12-16 19:32:17.965985] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:30:33.811 [2024-12-16 19:32:17.966000] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:30:33.811 [2024-12-16 19:32:17.966015] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:30:33.811 [2024-12-16 19:32:17.966029] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:30:33.811 [2024-12-16 19:32:17.966064] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:30:33.811 [2024-12-16 19:32:17.966081] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:33.811 [2024-12-16 19:32:17.966096] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:30:33.811 [2024-12-16 19:32:17.966110] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:30:33.811 [2024-12-16 19:32:17.966125] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:33.811 [2024-12-16 19:32:17.966166] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:30:33.811 [2024-12-16 19:32:17.966194] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 
00:30:33.811 [2024-12-16 19:32:17.966209] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:33.811 [2024-12-16 19:32:17.966223] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:30:33.811 [2024-12-16 19:32:17.966238] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:30:33.811 [2024-12-16 19:32:17.966252] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:33.811 [2024-12-16 19:32:17.966290] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:30:33.811 [2024-12-16 19:32:17.966307] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:30:33.811 [2024-12-16 19:32:17.966329] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:33.811 [2024-12-16 19:32:17.966344] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:30:33.811 [2024-12-16 19:32:17.966358] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:30:33.811 [2024-12-16 19:32:17.966404] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:33.811 [2024-12-16 19:32:17.966420] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:30:33.811 [2024-12-16 19:32:17.966435] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:30:33.811 [2024-12-16 19:32:17.966450] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:33.811 [2024-12-16 19:32:17.966463] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:30:33.811 [2024-12-16 19:32:17.966478] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:30:33.811 [2024-12-16 19:32:17.966510] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:33.811 [2024-12-16 19:32:17.966612] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:30:33.811 [2024-12-16 19:32:17.966629] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:30:33.811 [2024-12-16 19:32:17.966661] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:33.811 [2024-12-16 19:32:17.966678] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:30:33.811 [2024-12-16 19:32:17.966692] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:30:33.811 [2024-12-16 19:32:17.966707] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:33.811 [2024-12-16 19:32:17.966721] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:30:33.811 [2024-12-16 19:32:17.966735] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:30:33.811 [2024-12-16 19:32:17.966750] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:33.811 [2024-12-16 19:32:17.966793] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:30:33.811 [2024-12-16 19:32:17.966810] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:30:33.811 [2024-12-16 19:32:17.966824] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:33.811 [2024-12-16 19:32:17.966839] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:30:33.811 [2024-12-16 19:32:17.966854] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:30:33.811 [2024-12-16 19:32:17.966869] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:30:33.811 [2024-12-16 19:32:17.966885] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 
0.12 MiB 00:30:33.811 [2024-12-16 19:32:17.966901] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:30:33.811 [2024-12-16 19:32:17.966944] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:30:33.811 [2024-12-16 19:32:17.966960] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:30:33.811 [2024-12-16 19:32:17.966976] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:30:33.811 [2024-12-16 19:32:17.966991] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:30:33.811 [2024-12-16 19:32:17.967005] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:30:33.811 [2024-12-16 19:32:17.967021] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:30:33.811 [2024-12-16 19:32:17.967045] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:33.811 [2024-12-16 19:32:17.967097] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:30:33.811 [2024-12-16 19:32:17.967122] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:30:33.811 [2024-12-16 19:32:17.967144] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:30:33.811 [2024-12-16 19:32:17.967167] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:30:33.811 [2024-12-16 19:32:17.967202] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:30:33.811 [2024-12-16 19:32:17.967224] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:30:33.811 [2024-12-16 19:32:17.967267] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:30:33.811 [2024-12-16 19:32:17.967394] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:30:33.811 [2024-12-16 19:32:17.967417] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:30:33.811 [2024-12-16 19:32:17.967440] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:30:33.811 [2024-12-16 19:32:17.967462] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:30:33.811 [2024-12-16 19:32:17.967470] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:30:33.811 [2024-12-16 19:32:17.967476] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:30:33.811 [2024-12-16 19:32:17.967482] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:30:33.811 [2024-12-16 19:32:17.967489] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata 
layout - base dev: 00:30:33.811 [2024-12-16 19:32:17.967496] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:33.811 [2024-12-16 19:32:17.967505] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:30:33.811 [2024-12-16 19:32:17.967511] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:30:33.811 [2024-12-16 19:32:17.967517] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:30:33.811 [2024-12-16 19:32:17.967522] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:30:33.811 [2024-12-16 19:32:17.967528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:33.811 [2024-12-16 19:32:17.967534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:30:33.811 [2024-12-16 19:32:17.967542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.717 ms 00:30:33.812 [2024-12-16 19:32:17.967548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:33.812 [2024-12-16 19:32:17.989032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:33.812 [2024-12-16 19:32:17.989121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:30:33.812 [2024-12-16 19:32:17.989159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 21.440 ms 00:30:33.812 [2024-12-16 19:32:17.989189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:33.812 [2024-12-16 19:32:17.989228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:33.812 [2024-12-16 19:32:17.989244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:30:33.812 [2024-12-16 19:32:17.989261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:30:33.812 [2024-12-16 19:32:17.989277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:33.812 [2024-12-16 19:32:18.015757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:33.812 [2024-12-16 19:32:18.015848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:30:33.812 [2024-12-16 19:32:18.015887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 26.428 ms 00:30:33.812 [2024-12-16 19:32:18.015905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:33.812 [2024-12-16 19:32:18.015941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:33.812 [2024-12-16 19:32:18.015958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:30:33.812 [2024-12-16 19:32:18.015973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:30:33.812 [2024-12-16 19:32:18.015991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:33.812 [2024-12-16 19:32:18.016074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:33.812 [2024-12-16 19:32:18.016096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:30:33.812 [2024-12-16 19:32:18.016113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.035 ms 00:30:33.812 [2024-12-16 19:32:18.016161] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:30:33.812 [2024-12-16 19:32:18.016225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:33.812 [2024-12-16 19:32:18.016293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:30:33.812 [2024-12-16 19:32:18.016335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:30:33.812 [2024-12-16 19:32:18.016358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:33.812 [2024-12-16 19:32:18.029572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:33.812 [2024-12-16 19:32:18.029658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:30:33.812 [2024-12-16 19:32:18.029695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.185 ms 00:30:33.812 [2024-12-16 19:32:18.029712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:33.812 [2024-12-16 19:32:18.029810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:33.812 [2024-12-16 19:32:18.029832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:30:33.812 [2024-12-16 19:32:18.029848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:30:33.812 [2024-12-16 19:32:18.029863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:33.812 [2024-12-16 19:32:18.055519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:33.812 [2024-12-16 19:32:18.055650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:30:33.812 [2024-12-16 19:32:18.055708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.632 ms 00:30:33.812 [2024-12-16 19:32:18.055733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:33.812 [2024-12-16 19:32:18.063832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:33.812 [2024-12-16 19:32:18.063916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:30:33.812 [2024-12-16 19:32:18.063966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.387 ms 00:30:33.812 [2024-12-16 19:32:18.063983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:33.812 [2024-12-16 19:32:18.111528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:33.812 [2024-12-16 19:32:18.111650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:30:33.812 [2024-12-16 19:32:18.111692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 47.492 ms 00:30:33.812 [2024-12-16 19:32:18.111710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:33.812 [2024-12-16 19:32:18.111842] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:30:33.812 [2024-12-16 19:32:18.111971] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:30:33.812 [2024-12-16 19:32:18.112185] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:30:33.812 [2024-12-16 19:32:18.112304] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:30:33.812 [2024-12-16 19:32:18.112390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:33.812 [2024-12-16 19:32:18.112406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:30:33.812 [2024-12-16 
19:32:18.112449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.643 ms 00:30:33.812 [2024-12-16 19:32:18.112469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:33.812 [2024-12-16 19:32:18.112526] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:30:33.812 [2024-12-16 19:32:18.112556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:33.812 [2024-12-16 19:32:18.112576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:30:33.812 [2024-12-16 19:32:18.112627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms 00:30:33.812 [2024-12-16 19:32:18.112644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:33.812 [2024-12-16 19:32:18.125361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:33.812 [2024-12-16 19:32:18.125461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:30:33.812 [2024-12-16 19:32:18.125499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.690 ms 00:30:33.812 [2024-12-16 19:32:18.125517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:33.812 [2024-12-16 19:32:18.131999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:33.812 [2024-12-16 19:32:18.132076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:30:33.812 [2024-12-16 19:32:18.132115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:30:33.812 [2024-12-16 19:32:18.132133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:33.812 [2024-12-16 19:32:18.132221] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:30:33.812 [2024-12-16 19:32:18.132394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:33.812 [2024-12-16 19:32:18.132490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:30:33.812 [2024-12-16 19:32:18.132510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.174 ms 00:30:33.812 [2024-12-16 19:32:18.132526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:34.380 [2024-12-16 19:32:18.720228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:34.380 [2024-12-16 19:32:18.720485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:30:34.380 [2024-12-16 19:32:18.720510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 587.088 ms 00:30:34.380 [2024-12-16 19:32:18.720519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:34.380 [2024-12-16 19:32:18.724863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:34.380 [2024-12-16 19:32:18.724904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:30:34.380 [2024-12-16 19:32:18.724915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.351 ms 00:30:34.380 [2024-12-16 19:32:18.724923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:34.380 [2024-12-16 19:32:18.725990] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:30:34.380 [2024-12-16 19:32:18.726029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:34.380 [2024-12-16 19:32:18.726039] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:30:34.380 [2024-12-16 19:32:18.726047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.071 ms 00:30:34.380 [2024-12-16 19:32:18.726055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:34.380 [2024-12-16 19:32:18.726088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:34.380 [2024-12-16 19:32:18.726097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:30:34.380 [2024-12-16 19:32:18.726107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:30:34.380 [2024-12-16 19:32:18.726120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:34.380 [2024-12-16 19:32:18.726155] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 593.930 ms, result 0 00:30:34.380 [2024-12-16 19:32:18.726215] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:30:34.380 [2024-12-16 19:32:18.726419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:34.380 [2024-12-16 19:32:18.726432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:30:34.380 [2024-12-16 19:32:18.726440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.205 ms 00:30:34.380 [2024-12-16 19:32:18.726448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:35.324 [2024-12-16 19:32:19.389792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:35.324 [2024-12-16 19:32:19.389876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:30:35.324 [2024-12-16 19:32:19.389912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 662.286 ms 00:30:35.324 [2024-12-16 19:32:19.389923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:35.324 [2024-12-16 19:32:19.395008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:35.324 [2024-12-16 19:32:19.395066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:30:35.324 [2024-12-16 19:32:19.395079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.757 ms 00:30:35.324 [2024-12-16 19:32:19.395088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:35.324 [2024-12-16 19:32:19.396157] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:30:35.324 [2024-12-16 19:32:19.396231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:35.324 [2024-12-16 19:32:19.396243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:30:35.324 [2024-12-16 19:32:19.396254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.107 ms 00:30:35.324 [2024-12-16 19:32:19.396264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:35.324 [2024-12-16 19:32:19.396309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:35.324 [2024-12-16 19:32:19.396322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:30:35.324 [2024-12-16 19:32:19.396332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:30:35.324 [2024-12-16 19:32:19.396340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:35.324 [2024-12-16 
19:32:19.396382] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 670.161 ms, result 0 00:30:35.324 [2024-12-16 19:32:19.396437] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:30:35.324 [2024-12-16 19:32:19.396451] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:30:35.324 [2024-12-16 19:32:19.396463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:35.324 [2024-12-16 19:32:19.396473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:30:35.324 [2024-12-16 19:32:19.396483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1264.260 ms 00:30:35.324 [2024-12-16 19:32:19.396492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:35.324 [2024-12-16 19:32:19.396526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:35.324 [2024-12-16 19:32:19.396540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:30:35.324 [2024-12-16 19:32:19.396548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:30:35.324 [2024-12-16 19:32:19.396556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:35.324 [2024-12-16 19:32:19.410593] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:30:35.324 [2024-12-16 19:32:19.410986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:35.324 [2024-12-16 19:32:19.411004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:30:35.324 [2024-12-16 19:32:19.411017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.410 ms 00:30:35.324 [2024-12-16 19:32:19.411028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:35.324 [2024-12-16 19:32:19.411811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:35.324 [2024-12-16 19:32:19.411837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:30:35.324 [2024-12-16 19:32:19.411853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.689 ms 00:30:35.324 [2024-12-16 19:32:19.411862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:35.324 [2024-12-16 19:32:19.414095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:35.324 [2024-12-16 19:32:19.414295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:30:35.324 [2024-12-16 19:32:19.414314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.215 ms 00:30:35.324 [2024-12-16 19:32:19.414324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:35.324 [2024-12-16 19:32:19.414378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:35.324 [2024-12-16 19:32:19.414390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:30:35.324 [2024-12-16 19:32:19.414399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:30:35.324 [2024-12-16 19:32:19.414414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:35.324 [2024-12-16 19:32:19.414534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:35.324 [2024-12-16 19:32:19.414547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:30:35.324 
[2024-12-16 19:32:19.414584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.028 ms 00:30:35.324 [2024-12-16 19:32:19.414593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:35.324 [2024-12-16 19:32:19.414618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:35.324 [2024-12-16 19:32:19.414629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:30:35.324 [2024-12-16 19:32:19.414638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:30:35.324 [2024-12-16 19:32:19.414646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:35.324 [2024-12-16 19:32:19.414693] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:30:35.324 [2024-12-16 19:32:19.414706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:35.324 [2024-12-16 19:32:19.414715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:30:35.324 [2024-12-16 19:32:19.414723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:30:35.324 [2024-12-16 19:32:19.414732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:35.324 [2024-12-16 19:32:19.414790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:35.324 [2024-12-16 19:32:19.414801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:30:35.324 [2024-12-16 19:32:19.414810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.039 ms 00:30:35.324 [2024-12-16 19:32:19.414820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:35.324 [2024-12-16 19:32:19.416433] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1476.594 ms, result 0 00:30:35.324 [2024-12-16 19:32:19.431635] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:35.325 [2024-12-16 19:32:19.447642] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:30:35.325 [2024-12-16 19:32:19.457462] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:30:35.325 Validate MD5 checksum, iteration 1 00:30:35.325 19:32:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:35.325 19:32:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:30:35.325 19:32:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:30:35.325 19:32:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:30:35.325 19:32:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:30:35.325 19:32:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:30:35.325 19:32:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:30:35.325 19:32:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:30:35.325 19:32:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:30:35.325 19:32:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:30:35.325 19:32:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:35.325 19:32:19 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock'
00:30:35.325 19:32:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]]
00:30:35.325 19:32:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0
00:30:35.325 19:32:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0
00:30:35.325 [2024-12-16 19:32:19.575151] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization...
00:30:35.325 [2024-12-16 19:32:19.575546] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85230 ]
00:30:35.586 [2024-12-16 19:32:19.739435] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:30:35.586 [2024-12-16 19:32:19.857561] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:30:37.500  [2024-12-16T19:32:22.113Z] Copying: 545/1024 [MB] (545 MBps) [2024-12-16T19:32:26.297Z] Copying: 1024/1024 [MB] (average 598 MBps)
00:30:41.943 
00:30:41.943 19:32:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024
00:30:41.943 19:32:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file
00:30:43.844 19:32:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d '
00:30:43.844 Validate MD5 checksum, iteration 2
00:30:43.844 19:32:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=09f670e9dcec502ef11c64ec716fae23
00:30:43.844 19:32:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 09f670e9dcec502ef11c64ec716fae23 != \0\9\f\6\7\0\e\9\d\c\e\c\5\0\2\e\f\1\1\c\6\4\e\c\7\1\6\f\a\e\2\3 ]]
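[Editor's note] The xtrace above is one pass of the checksum loop in test/ftl/upgrade_shutdown.sh: tcp_dd pulls 1024 MiB (bs=1 MiB, count=1024) from the restored ftln1 bdev over NVMe/TCP via spdk_dd, md5sum hashes the output file, and the digest is compared with the one recorded before the shutdown. A condensed sketch of that loop, assuming a testdir variable and an md5 array of reference digests captured earlier (neither is shown in this capture):

    skip=0
    for ((i = 0; i < iterations; i++)); do
        echo "Validate MD5 checksum, iteration $((i + 1))"
        tcp_dd --ib=ftln1 --of="$testdir/file" --bs=1048576 --count=1024 --qd=2 --skip=$skip
        skip=$((skip + 1024))
        sum=$(md5sum "$testdir/file" | cut -f1 -d' ')
        # The run fails if the data read back differs from the pre-shutdown digest.
        [[ $sum != "${md5[i]}" ]] && return 1
    done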
00:30:43.844 19:32:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ ))
00:30:43.844 19:32:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations ))
00:30:43.844 19:32:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2'
00:30:43.844 19:32:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024
00:30:43.844 19:32:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup
00:30:43.844 19:32:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock'
00:30:43.844 19:32:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]]
00:30:43.844 19:32:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0
00:30:43.844 19:32:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024
00:30:43.844 [2024-12-16 19:32:27.751869] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization...
00:30:43.844 [2024-12-16 19:32:27.751984] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85319 ]
00:30:43.844 [2024-12-16 19:32:27.904914] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:30:43.844 [2024-12-16 19:32:27.978921] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:30:45.241  [2024-12-16T19:32:30.215Z] Copying: 628/1024 [MB] (628 MBps) [2024-12-16T19:32:32.120Z] Copying: 1024/1024 [MB] (average 632 MBps)
00:30:47.766 
00:30:48.024 19:32:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048
00:30:48.024 19:32:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file
00:30:49.924 19:32:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d '
00:30:49.924 19:32:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=1fe019f58bd9e6725aceffa6eea41770
00:30:49.924 19:32:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 1fe019f58bd9e6725aceffa6eea41770 != \1\f\e\0\1\9\f\5\8\b\d\9\e\6\7\2\5\a\c\e\f\f\a\6\e\e\a\4\1\7\7\0 ]]
00:30:49.924 19:32:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ ))
00:30:49.924 19:32:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations ))
00:30:49.924 19:32:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT
00:30:49.924 19:32:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup
00:30:49.924 19:32:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT
00:30:49.924 19:32:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file
00:30:50.183 19:32:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5
00:30:50.183 19:32:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup
00:30:50.183 19:32:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup
00:30:50.183 19:32:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown
00:30:50.183 19:32:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 85201 ]]
00:30:50.183 19:32:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 85201
00:30:50.183 19:32:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 85201 ']'
00:30:50.183 19:32:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 85201
00:30:50.183 19:32:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname
00:30:50.183 19:32:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:30:50.183 19:32:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85201
00:30:50.183 killing process with pid 85201
00:30:50.183 19:32:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:30:50.183 19:32:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:30:50.183 19:32:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85201'
00:30:50.183 19:32:34 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@973 -- # kill 85201 00:30:50.183 19:32:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 85201 00:30:50.751 [2024-12-16 19:32:34.971668] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:30:50.751 [2024-12-16 19:32:34.984527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:50.751 [2024-12-16 19:32:34.984564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:30:50.751 [2024-12-16 19:32:34.984575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:30:50.751 [2024-12-16 19:32:34.984582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:50.751 [2024-12-16 19:32:34.984601] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:30:50.751 [2024-12-16 19:32:34.986771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:50.751 [2024-12-16 19:32:34.986797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:30:50.751 [2024-12-16 19:32:34.986809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.159 ms 00:30:50.751 [2024-12-16 19:32:34.986816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:50.751 [2024-12-16 19:32:34.986998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:50.751 [2024-12-16 19:32:34.987007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:30:50.751 [2024-12-16 19:32:34.987014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.163 ms 00:30:50.751 [2024-12-16 19:32:34.987020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:50.751 [2024-12-16 19:32:34.988510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:50.751 [2024-12-16 19:32:34.988536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:30:50.751 [2024-12-16 19:32:34.988544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.479 ms 00:30:50.751 [2024-12-16 19:32:34.988554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:50.751 [2024-12-16 19:32:34.989417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:50.751 [2024-12-16 19:32:34.989435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:30:50.751 [2024-12-16 19:32:34.989442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.837 ms 00:30:50.751 [2024-12-16 19:32:34.989449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:50.751 [2024-12-16 19:32:34.997721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:50.751 [2024-12-16 19:32:34.997747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:30:50.751 [2024-12-16 19:32:34.997756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.247 ms 00:30:50.751 [2024-12-16 19:32:34.997766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:50.751 [2024-12-16 19:32:35.002213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:50.751 [2024-12-16 19:32:35.002240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:30:50.751 [2024-12-16 19:32:35.002249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.418 ms 00:30:50.751 [2024-12-16 19:32:35.002257] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:30:50.751 [2024-12-16 19:32:35.002517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:50.751 [2024-12-16 19:32:35.002556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:30:50.751 [2024-12-16 19:32:35.002566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.053 ms 00:30:50.751 [2024-12-16 19:32:35.002579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:50.751 [2024-12-16 19:32:35.010515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:50.751 [2024-12-16 19:32:35.010543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:30:50.751 [2024-12-16 19:32:35.010558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.922 ms 00:30:50.751 [2024-12-16 19:32:35.010565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:50.751 [2024-12-16 19:32:35.018562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:50.751 [2024-12-16 19:32:35.018586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:30:50.751 [2024-12-16 19:32:35.018594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.970 ms 00:30:50.751 [2024-12-16 19:32:35.018600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:50.751 [2024-12-16 19:32:35.026251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:50.751 [2024-12-16 19:32:35.026275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:30:50.751 [2024-12-16 19:32:35.026283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.626 ms 00:30:50.751 [2024-12-16 19:32:35.026289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:50.751 [2024-12-16 19:32:35.034092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:50.751 [2024-12-16 19:32:35.034116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:30:50.751 [2024-12-16 19:32:35.034124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.756 ms 00:30:50.751 [2024-12-16 19:32:35.034130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:50.751 [2024-12-16 19:32:35.034154] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:30:50.751 [2024-12-16 19:32:35.034166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:30:50.751 [2024-12-16 19:32:35.034185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:30:50.751 [2024-12-16 19:32:35.034192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:30:50.751 [2024-12-16 19:32:35.034198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:50.751 [2024-12-16 19:32:35.034204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:50.751 [2024-12-16 19:32:35.034210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:50.751 [2024-12-16 19:32:35.034217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:50.751 [2024-12-16 19:32:35.034223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:50.751 
[2024-12-16 19:32:35.034229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:50.751 [2024-12-16 19:32:35.034235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:50.751 [2024-12-16 19:32:35.034241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:50.751 [2024-12-16 19:32:35.034247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:50.751 [2024-12-16 19:32:35.034253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:50.751 [2024-12-16 19:32:35.034259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:50.751 [2024-12-16 19:32:35.034266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:50.751 [2024-12-16 19:32:35.034272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:50.752 [2024-12-16 19:32:35.034278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:50.752 [2024-12-16 19:32:35.034283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:50.752 [2024-12-16 19:32:35.034291] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:30:50.752 [2024-12-16 19:32:35.034297] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 91ef306d-1de1-4576-9514-e2069d79fd9f 00:30:50.752 [2024-12-16 19:32:35.034306] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:30:50.752 [2024-12-16 19:32:35.034313] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:30:50.752 [2024-12-16 19:32:35.034318] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:30:50.752 [2024-12-16 19:32:35.034324] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:30:50.752 [2024-12-16 19:32:35.034330] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:30:50.752 [2024-12-16 19:32:35.034336] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:30:50.752 [2024-12-16 19:32:35.034345] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:30:50.752 [2024-12-16 19:32:35.034351] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:30:50.752 [2024-12-16 19:32:35.034356] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:30:50.752 [2024-12-16 19:32:35.034363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:50.752 [2024-12-16 19:32:35.034370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:30:50.752 [2024-12-16 19:32:35.034376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.210 ms 00:30:50.752 [2024-12-16 19:32:35.034383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:50.752 [2024-12-16 19:32:35.044431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:50.752 [2024-12-16 19:32:35.044456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:30:50.752 [2024-12-16 19:32:35.044465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.026 ms 00:30:50.752 [2024-12-16 19:32:35.044471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 
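[Editor's note] Each management step in the shutdown above is traced by trace_step() in mngt/ftl_mngt.c as an Action (or Rollback), a name, a duration and a status, and finish_msg() later reports the rolled-up total for the whole process. When triaging a slow run, the per-step durations can be tabulated straight from a console capture; an illustrative pipeline, assuming the log is saved as build.log with one entry per line as the console originally emitted them:

    # Pair each step name (428:trace_step) with the duration that follows it
    # (430:trace_step), then list the slowest steps first.
    awk '/428:trace_step/ { sub(/.*name: /, ""); name = $0 }
         /430:trace_step/ { sub(/.*duration: /, ""); printf "%10s ms  %s\n", $1, name }' build.log |
        sort -rn | head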
00:30:50.752 [2024-12-16 19:32:35.044760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:50.752 [2024-12-16 19:32:35.044767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:30:50.752 [2024-12-16 19:32:35.044774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.270 ms 00:30:50.752 [2024-12-16 19:32:35.044780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:50.752 [2024-12-16 19:32:35.079760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:50.752 [2024-12-16 19:32:35.079920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:30:50.752 [2024-12-16 19:32:35.079934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:50.752 [2024-12-16 19:32:35.079946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:50.752 [2024-12-16 19:32:35.079972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:50.752 [2024-12-16 19:32:35.079979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:30:50.752 [2024-12-16 19:32:35.079985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:50.752 [2024-12-16 19:32:35.079991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:50.752 [2024-12-16 19:32:35.080068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:50.752 [2024-12-16 19:32:35.080077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:30:50.752 [2024-12-16 19:32:35.080084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:50.752 [2024-12-16 19:32:35.080090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:50.752 [2024-12-16 19:32:35.080106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:50.752 [2024-12-16 19:32:35.080113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:30:50.752 [2024-12-16 19:32:35.080120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:50.752 [2024-12-16 19:32:35.080126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:51.010 [2024-12-16 19:32:35.142152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:51.010 [2024-12-16 19:32:35.142193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:30:51.010 [2024-12-16 19:32:35.142202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:51.010 [2024-12-16 19:32:35.142209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:51.010 [2024-12-16 19:32:35.193809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:51.010 [2024-12-16 19:32:35.193975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:30:51.010 [2024-12-16 19:32:35.193989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:51.010 [2024-12-16 19:32:35.193997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:51.010 [2024-12-16 19:32:35.194065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:51.010 [2024-12-16 19:32:35.194073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:30:51.010 [2024-12-16 19:32:35.194080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:51.010 [2024-12-16 19:32:35.194086] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:51.010 [2024-12-16 19:32:35.194134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:51.010 [2024-12-16 19:32:35.194153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:30:51.010 [2024-12-16 19:32:35.194160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:51.010 [2024-12-16 19:32:35.194167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:51.010 [2024-12-16 19:32:35.194267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:51.010 [2024-12-16 19:32:35.194275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:30:51.010 [2024-12-16 19:32:35.194281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:51.010 [2024-12-16 19:32:35.194287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:51.010 [2024-12-16 19:32:35.194315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:51.010 [2024-12-16 19:32:35.194323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:30:51.010 [2024-12-16 19:32:35.194332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:51.010 [2024-12-16 19:32:35.194339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:51.010 [2024-12-16 19:32:35.194376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:51.010 [2024-12-16 19:32:35.194384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:30:51.011 [2024-12-16 19:32:35.194391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:51.011 [2024-12-16 19:32:35.194397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:51.011 [2024-12-16 19:32:35.194439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:51.011 [2024-12-16 19:32:35.194449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:30:51.011 [2024-12-16 19:32:35.194455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:51.011 [2024-12-16 19:32:35.194461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:51.011 [2024-12-16 19:32:35.194589] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 210.014 ms, result 0 00:30:51.578 19:32:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:30:51.578 19:32:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:51.578 19:32:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:30:51.578 19:32:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:30:51.578 19:32:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:30:51.578 19:32:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:30:51.578 Remove shared memory files 00:30:51.578 19:32:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:30:51.578 19:32:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:30:51.578 19:32:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:30:51.578 19:32:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:30:51.578 19:32:35 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid84987 00:30:51.578 19:32:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:30:51.578 19:32:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:30:51.578 00:30:51.578 real 1m24.386s 00:30:51.578 user 1m55.262s 00:30:51.578 sys 0m20.621s 00:30:51.578 19:32:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:51.578 19:32:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:51.578 ************************************ 00:30:51.578 END TEST ftl_upgrade_shutdown 00:30:51.578 ************************************ 00:30:51.840 19:32:35 ftl -- ftl/ftl.sh@80 -- # [[ 1 -eq 1 ]] 00:30:51.840 19:32:35 ftl -- ftl/ftl.sh@81 -- # run_test ftl_restore_fast /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:30:51.840 19:32:35 ftl -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:30:51.840 19:32:35 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:51.840 19:32:35 ftl -- common/autotest_common.sh@10 -- # set +x 00:30:51.840 ************************************ 00:30:51.840 START TEST ftl_restore_fast 00:30:51.840 ************************************ 00:30:51.840 19:32:35 ftl.ftl_restore_fast -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:30:51.840 * Looking for test storage... 00:30:51.840 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:30:51.840 19:32:36 ftl.ftl_restore_fast -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:51.840 19:32:36 ftl.ftl_restore_fast -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:51.840 19:32:36 ftl.ftl_restore_fast -- common/autotest_common.sh@1711 -- # lcov --version 00:30:51.840 19:32:36 ftl.ftl_restore_fast -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:51.840 19:32:36 ftl.ftl_restore_fast -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:51.840 19:32:36 ftl.ftl_restore_fast -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:51.840 19:32:36 ftl.ftl_restore_fast -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:51.840 19:32:36 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # IFS=.-: 00:30:51.840 19:32:36 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # read -ra ver1 00:30:51.840 19:32:36 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # IFS=.-: 00:30:51.840 19:32:36 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # read -ra ver2 00:30:51.840 19:32:36 ftl.ftl_restore_fast -- scripts/common.sh@338 -- # local 'op=<' 00:30:51.840 19:32:36 ftl.ftl_restore_fast -- scripts/common.sh@340 -- # ver1_l=2 00:30:51.840 19:32:36 ftl.ftl_restore_fast -- scripts/common.sh@341 -- # ver2_l=1 00:30:51.840 19:32:36 ftl.ftl_restore_fast -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:51.840 19:32:36 ftl.ftl_restore_fast -- scripts/common.sh@344 -- # case "$op" in 00:30:51.840 19:32:36 ftl.ftl_restore_fast -- scripts/common.sh@345 -- # : 1 00:30:51.840 19:32:36 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:51.840 19:32:36 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:51.840 19:32:36 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # decimal 1 00:30:51.840 19:32:36 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=1 00:30:51.840 19:32:36 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:51.840 19:32:36 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 1 00:30:51.840 19:32:36 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # ver1[v]=1 00:30:51.840 19:32:36 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # decimal 2 00:30:51.840 19:32:36 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=2 00:30:51.840 19:32:36 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:51.840 19:32:36 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 2 00:30:51.840 19:32:36 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # ver2[v]=2 00:30:51.840 19:32:36 ftl.ftl_restore_fast -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:51.840 19:32:36 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:51.840 19:32:36 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # return 0 00:30:51.840 19:32:36 ftl.ftl_restore_fast -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:51.840 19:32:36 ftl.ftl_restore_fast -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:51.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:51.840 --rc genhtml_branch_coverage=1 00:30:51.840 --rc genhtml_function_coverage=1 00:30:51.840 --rc genhtml_legend=1 00:30:51.840 --rc geninfo_all_blocks=1 00:30:51.840 --rc geninfo_unexecuted_blocks=1 00:30:51.840 00:30:51.840 ' 00:30:51.840 19:32:36 ftl.ftl_restore_fast -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:51.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:51.840 --rc genhtml_branch_coverage=1 00:30:51.840 --rc genhtml_function_coverage=1 00:30:51.840 --rc genhtml_legend=1 00:30:51.840 --rc geninfo_all_blocks=1 00:30:51.840 --rc geninfo_unexecuted_blocks=1 00:30:51.840 00:30:51.840 ' 00:30:51.840 19:32:36 ftl.ftl_restore_fast -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:51.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:51.840 --rc genhtml_branch_coverage=1 00:30:51.840 --rc genhtml_function_coverage=1 00:30:51.840 --rc genhtml_legend=1 00:30:51.840 --rc geninfo_all_blocks=1 00:30:51.840 --rc geninfo_unexecuted_blocks=1 00:30:51.840 00:30:51.840 ' 00:30:51.840 19:32:36 ftl.ftl_restore_fast -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:51.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:51.840 --rc genhtml_branch_coverage=1 00:30:51.840 --rc genhtml_function_coverage=1 00:30:51.840 --rc genhtml_legend=1 00:30:51.840 --rc geninfo_all_blocks=1 00:30:51.840 --rc geninfo_unexecuted_blocks=1 00:30:51.840 00:30:51.840 ' 00:30:51.840 19:32:36 ftl.ftl_restore_fast -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:30:51.840 19:32:36 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:30:51.840 19:32:36 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:30:51.840 19:32:36 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:30:51.840 19:32:36 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
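[Editor's note] The cmp_versions walk above is scripts/common.sh probing the installed lcov: `cmp_versions 1.15 '<' 2` splits both version strings on '.', '-' and ':' (the IFS=.-: and read -ra steps) and compares the fields numerically from the left. A trimmed, hypothetical reimplementation of just the less-than case (the real helper also handles '>', '<=', '>=' and validates each field through its decimal() helper):

    # Succeed when version $1 sorts strictly before version $2.
    version_lt() {
        local IFS=.-: v
        local -a a b
        read -ra a <<< "$1"
        read -ra b <<< "$2"
        for ((v = 0; v < ${#a[@]} || v < ${#b[@]}; v++)); do
            ((${a[v]:-0} < ${b[v]:-0})) && return 0  # first differing field decides
            ((${a[v]:-0} > ${b[v]:-0})) && return 1
        done
        return 1  # equal versions are not less-than
    }

Here 1.15 versus 2 is decided on the first field (1 < 2), so the probe succeeds and the branch-coverage LCOV_OPTS seen above get exported.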
00:30:51.840 19:32:36 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:30:51.840 19:32:36 ftl.ftl_restore_fast -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:51.840 19:32:36 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:30:51.840 19:32:36 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:30:51.840 19:32:36 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:51.840 19:32:36 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:51.840 19:32:36 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:30:51.840 19:32:36 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:30:51.840 19:32:36 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:51.840 19:32:36 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:51.840 19:32:36 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:30:51.840 19:32:36 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:30:51.840 19:32:36 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:51.840 19:32:36 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:51.841 19:32:36 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:30:51.841 19:32:36 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:30:51.841 19:32:36 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:30:51.841 19:32:36 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:30:51.841 19:32:36 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:30:51.841 19:32:36 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:30:51.841 19:32:36 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:30:51.841 19:32:36 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # spdk_ini_pid= 00:30:51.841 19:32:36 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:51.841 19:32:36 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:51.841 19:32:36 ftl.ftl_restore_fast -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:51.841 19:32:36 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mktemp -d 00:30:51.841 19:32:36 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.cMofCBMQ8E 00:30:51.841 19:32:36 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:30:51.841 19:32:36 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:30:51.841 19:32:36 ftl.ftl_restore_fast -- ftl/restore.sh@19 -- # fast_shutdown=1 00:30:51.841 19:32:36 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:30:51.841 19:32:36 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:30:51.841 19:32:36 ftl.ftl_restore_fast -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:30:51.841 19:32:36 ftl.ftl_restore_fast 
-- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:30:51.841 19:32:36 ftl.ftl_restore_fast -- ftl/restore.sh@23 -- # shift 3 00:30:51.841 19:32:36 ftl.ftl_restore_fast -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:30:51.841 19:32:36 ftl.ftl_restore_fast -- ftl/restore.sh@25 -- # timeout=240 00:30:51.841 19:32:36 ftl.ftl_restore_fast -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:30:51.841 19:32:36 ftl.ftl_restore_fast -- ftl/restore.sh@39 -- # svcpid=85492 00:30:51.841 19:32:36 ftl.ftl_restore_fast -- ftl/restore.sh@41 -- # waitforlisten 85492 00:30:51.841 19:32:36 ftl.ftl_restore_fast -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:51.841 19:32:36 ftl.ftl_restore_fast -- common/autotest_common.sh@835 -- # '[' -z 85492 ']' 00:30:51.841 19:32:36 ftl.ftl_restore_fast -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:51.841 19:32:36 ftl.ftl_restore_fast -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:51.841 19:32:36 ftl.ftl_restore_fast -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:51.841 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:51.841 19:32:36 ftl.ftl_restore_fast -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:51.841 19:32:36 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:30:52.101 [2024-12-16 19:32:36.195748] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:30:52.101 [2024-12-16 19:32:36.196034] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85492 ] 00:30:52.101 [2024-12-16 19:32:36.357134] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:52.361 [2024-12-16 19:32:36.471205] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:30:52.934 19:32:37 ftl.ftl_restore_fast -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:52.934 19:32:37 ftl.ftl_restore_fast -- common/autotest_common.sh@868 -- # return 0 00:30:52.934 19:32:37 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:30:52.934 19:32:37 ftl.ftl_restore_fast -- ftl/common.sh@54 -- # local name=nvme0 00:30:52.934 19:32:37 ftl.ftl_restore_fast -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:30:52.934 19:32:37 ftl.ftl_restore_fast -- ftl/common.sh@56 -- # local size=103424 00:30:52.934 19:32:37 ftl.ftl_restore_fast -- ftl/common.sh@59 -- # local base_bdev 00:30:52.934 19:32:37 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:30:53.194 19:32:37 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:30:53.194 19:32:37 ftl.ftl_restore_fast -- ftl/common.sh@62 -- # local base_size 00:30:53.194 19:32:37 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:30:53.194 19:32:37 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:30:53.194 19:32:37 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:30:53.194 19:32:37 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:30:53.194 19:32:37 ftl.ftl_restore_fast -- 
common/autotest_common.sh@1385 -- # local nb 00:30:53.194 19:32:37 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:30:53.455 19:32:37 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:30:53.455 { 00:30:53.455 "name": "nvme0n1", 00:30:53.455 "aliases": [ 00:30:53.455 "b65e0d62-6d91-47a1-9069-577604a79d33" 00:30:53.455 ], 00:30:53.455 "product_name": "NVMe disk", 00:30:53.455 "block_size": 4096, 00:30:53.455 "num_blocks": 1310720, 00:30:53.455 "uuid": "b65e0d62-6d91-47a1-9069-577604a79d33", 00:30:53.455 "numa_id": -1, 00:30:53.455 "assigned_rate_limits": { 00:30:53.455 "rw_ios_per_sec": 0, 00:30:53.455 "rw_mbytes_per_sec": 0, 00:30:53.455 "r_mbytes_per_sec": 0, 00:30:53.455 "w_mbytes_per_sec": 0 00:30:53.455 }, 00:30:53.455 "claimed": true, 00:30:53.455 "claim_type": "read_many_write_one", 00:30:53.455 "zoned": false, 00:30:53.455 "supported_io_types": { 00:30:53.455 "read": true, 00:30:53.455 "write": true, 00:30:53.455 "unmap": true, 00:30:53.455 "flush": true, 00:30:53.455 "reset": true, 00:30:53.455 "nvme_admin": true, 00:30:53.455 "nvme_io": true, 00:30:53.455 "nvme_io_md": false, 00:30:53.455 "write_zeroes": true, 00:30:53.455 "zcopy": false, 00:30:53.455 "get_zone_info": false, 00:30:53.455 "zone_management": false, 00:30:53.455 "zone_append": false, 00:30:53.455 "compare": true, 00:30:53.455 "compare_and_write": false, 00:30:53.455 "abort": true, 00:30:53.455 "seek_hole": false, 00:30:53.455 "seek_data": false, 00:30:53.455 "copy": true, 00:30:53.455 "nvme_iov_md": false 00:30:53.455 }, 00:30:53.455 "driver_specific": { 00:30:53.455 "nvme": [ 00:30:53.455 { 00:30:53.455 "pci_address": "0000:00:11.0", 00:30:53.455 "trid": { 00:30:53.455 "trtype": "PCIe", 00:30:53.455 "traddr": "0000:00:11.0" 00:30:53.455 }, 00:30:53.455 "ctrlr_data": { 00:30:53.455 "cntlid": 0, 00:30:53.455 "vendor_id": "0x1b36", 00:30:53.455 "model_number": "QEMU NVMe Ctrl", 00:30:53.455 "serial_number": "12341", 00:30:53.455 "firmware_revision": "8.0.0", 00:30:53.455 "subnqn": "nqn.2019-08.org.qemu:12341", 00:30:53.455 "oacs": { 00:30:53.455 "security": 0, 00:30:53.455 "format": 1, 00:30:53.455 "firmware": 0, 00:30:53.455 "ns_manage": 1 00:30:53.455 }, 00:30:53.455 "multi_ctrlr": false, 00:30:53.455 "ana_reporting": false 00:30:53.455 }, 00:30:53.455 "vs": { 00:30:53.455 "nvme_version": "1.4" 00:30:53.455 }, 00:30:53.455 "ns_data": { 00:30:53.455 "id": 1, 00:30:53.455 "can_share": false 00:30:53.455 } 00:30:53.455 } 00:30:53.455 ], 00:30:53.455 "mp_policy": "active_passive" 00:30:53.455 } 00:30:53.455 } 00:30:53.455 ]' 00:30:53.455 19:32:37 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:30:53.455 19:32:37 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:30:53.455 19:32:37 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:30:53.455 19:32:37 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=1310720 00:30:53.455 19:32:37 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:30:53.455 19:32:37 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 5120 00:30:53.455 19:32:37 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # base_size=5120 00:30:53.455 19:32:37 ftl.ftl_restore_fast -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:30:53.455 19:32:37 ftl.ftl_restore_fast -- ftl/common.sh@67 -- # clear_lvols 00:30:53.455 19:32:37 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # jq -r 
'.[] | .uuid' 00:30:53.455 19:32:37 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:53.717 19:32:37 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # stores=d6de574c-c813-48fc-be88-b3751b63f243 00:30:53.717 19:32:37 ftl.ftl_restore_fast -- ftl/common.sh@29 -- # for lvs in $stores 00:30:53.717 19:32:37 ftl.ftl_restore_fast -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u d6de574c-c813-48fc-be88-b3751b63f243 00:30:53.978 19:32:38 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:30:54.239 19:32:38 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # lvs=f6053ec0-8ab7-4721-8648-2747d5401920 00:30:54.239 19:32:38 ftl.ftl_restore_fast -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u f6053ec0-8ab7-4721-8648-2747d5401920 00:30:54.500 19:32:38 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # split_bdev=02b399bd-4e1e-4541-b946-575d9daf9e46 00:30:54.500 19:32:38 ftl.ftl_restore_fast -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:30:54.500 19:32:38 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 02b399bd-4e1e-4541-b946-575d9daf9e46 00:30:54.500 19:32:38 ftl.ftl_restore_fast -- ftl/common.sh@35 -- # local name=nvc0 00:30:54.500 19:32:38 ftl.ftl_restore_fast -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:30:54.500 19:32:38 ftl.ftl_restore_fast -- ftl/common.sh@37 -- # local base_bdev=02b399bd-4e1e-4541-b946-575d9daf9e46 00:30:54.500 19:32:38 ftl.ftl_restore_fast -- ftl/common.sh@38 -- # local cache_size= 00:30:54.500 19:32:38 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # get_bdev_size 02b399bd-4e1e-4541-b946-575d9daf9e46 00:30:54.500 19:32:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=02b399bd-4e1e-4541-b946-575d9daf9e46 00:30:54.500 19:32:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:30:54.500 19:32:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:30:54.500 19:32:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:30:54.500 19:32:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 02b399bd-4e1e-4541-b946-575d9daf9e46 00:30:54.761 19:32:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:30:54.761 { 00:30:54.761 "name": "02b399bd-4e1e-4541-b946-575d9daf9e46", 00:30:54.761 "aliases": [ 00:30:54.761 "lvs/nvme0n1p0" 00:30:54.761 ], 00:30:54.761 "product_name": "Logical Volume", 00:30:54.761 "block_size": 4096, 00:30:54.761 "num_blocks": 26476544, 00:30:54.761 "uuid": "02b399bd-4e1e-4541-b946-575d9daf9e46", 00:30:54.761 "assigned_rate_limits": { 00:30:54.761 "rw_ios_per_sec": 0, 00:30:54.761 "rw_mbytes_per_sec": 0, 00:30:54.761 "r_mbytes_per_sec": 0, 00:30:54.761 "w_mbytes_per_sec": 0 00:30:54.761 }, 00:30:54.761 "claimed": false, 00:30:54.761 "zoned": false, 00:30:54.761 "supported_io_types": { 00:30:54.761 "read": true, 00:30:54.761 "write": true, 00:30:54.761 "unmap": true, 00:30:54.761 "flush": false, 00:30:54.761 "reset": true, 00:30:54.761 "nvme_admin": false, 00:30:54.761 "nvme_io": false, 00:30:54.761 "nvme_io_md": false, 00:30:54.761 "write_zeroes": true, 00:30:54.761 "zcopy": false, 00:30:54.761 "get_zone_info": false, 00:30:54.761 "zone_management": false, 00:30:54.761 "zone_append": 
false, 00:30:54.761 "compare": false, 00:30:54.761 "compare_and_write": false, 00:30:54.761 "abort": false, 00:30:54.761 "seek_hole": true, 00:30:54.761 "seek_data": true, 00:30:54.761 "copy": false, 00:30:54.761 "nvme_iov_md": false 00:30:54.761 }, 00:30:54.761 "driver_specific": { 00:30:54.761 "lvol": { 00:30:54.761 "lvol_store_uuid": "f6053ec0-8ab7-4721-8648-2747d5401920", 00:30:54.761 "base_bdev": "nvme0n1", 00:30:54.761 "thin_provision": true, 00:30:54.761 "num_allocated_clusters": 0, 00:30:54.761 "snapshot": false, 00:30:54.761 "clone": false, 00:30:54.761 "esnap_clone": false 00:30:54.761 } 00:30:54.761 } 00:30:54.761 } 00:30:54.761 ]' 00:30:54.761 19:32:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:30:54.761 19:32:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:30:54.761 19:32:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:30:54.761 19:32:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544 00:30:54.761 19:32:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:30:54.761 19:32:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424 00:30:54.761 19:32:38 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # local base_size=5171 00:30:54.761 19:32:38 ftl.ftl_restore_fast -- ftl/common.sh@44 -- # local nvc_bdev 00:30:54.761 19:32:38 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:30:55.022 19:32:39 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:30:55.022 19:32:39 ftl.ftl_restore_fast -- ftl/common.sh@47 -- # [[ -z '' ]] 00:30:55.022 19:32:39 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # get_bdev_size 02b399bd-4e1e-4541-b946-575d9daf9e46 00:30:55.022 19:32:39 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=02b399bd-4e1e-4541-b946-575d9daf9e46 00:30:55.022 19:32:39 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:30:55.022 19:32:39 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:30:55.022 19:32:39 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:30:55.022 19:32:39 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 02b399bd-4e1e-4541-b946-575d9daf9e46 00:30:55.283 19:32:39 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:30:55.283 { 00:30:55.283 "name": "02b399bd-4e1e-4541-b946-575d9daf9e46", 00:30:55.283 "aliases": [ 00:30:55.283 "lvs/nvme0n1p0" 00:30:55.283 ], 00:30:55.283 "product_name": "Logical Volume", 00:30:55.283 "block_size": 4096, 00:30:55.283 "num_blocks": 26476544, 00:30:55.283 "uuid": "02b399bd-4e1e-4541-b946-575d9daf9e46", 00:30:55.283 "assigned_rate_limits": { 00:30:55.283 "rw_ios_per_sec": 0, 00:30:55.283 "rw_mbytes_per_sec": 0, 00:30:55.283 "r_mbytes_per_sec": 0, 00:30:55.283 "w_mbytes_per_sec": 0 00:30:55.283 }, 00:30:55.283 "claimed": false, 00:30:55.283 "zoned": false, 00:30:55.283 "supported_io_types": { 00:30:55.283 "read": true, 00:30:55.283 "write": true, 00:30:55.283 "unmap": true, 00:30:55.283 "flush": false, 00:30:55.283 "reset": true, 00:30:55.283 "nvme_admin": false, 00:30:55.283 "nvme_io": false, 00:30:55.283 "nvme_io_md": false, 00:30:55.283 "write_zeroes": true, 00:30:55.283 "zcopy": false, 00:30:55.283 "get_zone_info": false, 00:30:55.283 "zone_management": false, 
00:30:55.283 "zone_append": false, 00:30:55.283 "compare": false, 00:30:55.283 "compare_and_write": false, 00:30:55.283 "abort": false, 00:30:55.283 "seek_hole": true, 00:30:55.283 "seek_data": true, 00:30:55.283 "copy": false, 00:30:55.283 "nvme_iov_md": false 00:30:55.283 }, 00:30:55.283 "driver_specific": { 00:30:55.283 "lvol": { 00:30:55.283 "lvol_store_uuid": "f6053ec0-8ab7-4721-8648-2747d5401920", 00:30:55.283 "base_bdev": "nvme0n1", 00:30:55.283 "thin_provision": true, 00:30:55.283 "num_allocated_clusters": 0, 00:30:55.283 "snapshot": false, 00:30:55.283 "clone": false, 00:30:55.283 "esnap_clone": false 00:30:55.283 } 00:30:55.283 } 00:30:55.283 } 00:30:55.283 ]' 00:30:55.283 19:32:39 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:30:55.283 19:32:39 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:30:55.283 19:32:39 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:30:55.283 19:32:39 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544 00:30:55.283 19:32:39 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:30:55.283 19:32:39 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424 00:30:55.283 19:32:39 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # cache_size=5171 00:30:55.283 19:32:39 ftl.ftl_restore_fast -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:30:55.543 19:32:39 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:30:55.543 19:32:39 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # get_bdev_size 02b399bd-4e1e-4541-b946-575d9daf9e46 00:30:55.543 19:32:39 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=02b399bd-4e1e-4541-b946-575d9daf9e46 00:30:55.543 19:32:39 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:30:55.543 19:32:39 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:30:55.543 19:32:39 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:30:55.543 19:32:39 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 02b399bd-4e1e-4541-b946-575d9daf9e46 00:30:55.543 19:32:39 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:30:55.543 { 00:30:55.543 "name": "02b399bd-4e1e-4541-b946-575d9daf9e46", 00:30:55.543 "aliases": [ 00:30:55.543 "lvs/nvme0n1p0" 00:30:55.543 ], 00:30:55.543 "product_name": "Logical Volume", 00:30:55.543 "block_size": 4096, 00:30:55.543 "num_blocks": 26476544, 00:30:55.543 "uuid": "02b399bd-4e1e-4541-b946-575d9daf9e46", 00:30:55.543 "assigned_rate_limits": { 00:30:55.543 "rw_ios_per_sec": 0, 00:30:55.543 "rw_mbytes_per_sec": 0, 00:30:55.543 "r_mbytes_per_sec": 0, 00:30:55.543 "w_mbytes_per_sec": 0 00:30:55.543 }, 00:30:55.543 "claimed": false, 00:30:55.543 "zoned": false, 00:30:55.543 "supported_io_types": { 00:30:55.544 "read": true, 00:30:55.544 "write": true, 00:30:55.544 "unmap": true, 00:30:55.544 "flush": false, 00:30:55.544 "reset": true, 00:30:55.544 "nvme_admin": false, 00:30:55.544 "nvme_io": false, 00:30:55.544 "nvme_io_md": false, 00:30:55.544 "write_zeroes": true, 00:30:55.544 "zcopy": false, 00:30:55.544 "get_zone_info": false, 00:30:55.544 "zone_management": false, 00:30:55.544 "zone_append": false, 00:30:55.544 "compare": false, 00:30:55.544 "compare_and_write": false, 00:30:55.544 "abort": false, 00:30:55.544 "seek_hole": 
true, 00:30:55.544 "seek_data": true, 00:30:55.544 "copy": false, 00:30:55.544 "nvme_iov_md": false 00:30:55.544 }, 00:30:55.544 "driver_specific": { 00:30:55.544 "lvol": { 00:30:55.544 "lvol_store_uuid": "f6053ec0-8ab7-4721-8648-2747d5401920", 00:30:55.544 "base_bdev": "nvme0n1", 00:30:55.544 "thin_provision": true, 00:30:55.544 "num_allocated_clusters": 0, 00:30:55.544 "snapshot": false, 00:30:55.544 "clone": false, 00:30:55.544 "esnap_clone": false 00:30:55.544 } 00:30:55.544 } 00:30:55.544 } 00:30:55.544 ]' 00:30:55.544 19:32:39 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:30:55.544 19:32:39 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:30:55.544 19:32:39 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:30:55.806 19:32:39 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544 00:30:55.806 19:32:39 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:30:55.806 19:32:39 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424 00:30:55.806 19:32:39 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:30:55.806 19:32:39 ftl.ftl_restore_fast -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 02b399bd-4e1e-4541-b946-575d9daf9e46 --l2p_dram_limit 10' 00:30:55.806 19:32:39 ftl.ftl_restore_fast -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:30:55.806 19:32:39 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:30:55.806 19:32:39 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:30:55.806 19:32:39 ftl.ftl_restore_fast -- ftl/restore.sh@54 -- # '[' 1 -eq 1 ']' 00:30:55.806 19:32:39 ftl.ftl_restore_fast -- ftl/restore.sh@55 -- # ftl_construct_args+=' --fast-shutdown' 00:30:55.806 19:32:39 ftl.ftl_restore_fast -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 02b399bd-4e1e-4541-b946-575d9daf9e46 --l2p_dram_limit 10 -c nvc0n1p0 --fast-shutdown 00:30:55.806 [2024-12-16 19:32:40.097316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:55.806 [2024-12-16 19:32:40.097357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:30:55.806 [2024-12-16 19:32:40.097369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:30:55.806 [2024-12-16 19:32:40.097377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:55.806 [2024-12-16 19:32:40.097424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:55.806 [2024-12-16 19:32:40.097432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:55.806 [2024-12-16 19:32:40.097440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:30:55.806 [2024-12-16 19:32:40.097446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:55.806 [2024-12-16 19:32:40.097465] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:30:55.806 [2024-12-16 19:32:40.098021] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:30:55.806 [2024-12-16 19:32:40.098045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:55.806 [2024-12-16 19:32:40.098052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:55.806 [2024-12-16 19:32:40.098060] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.583 ms 00:30:55.806 [2024-12-16 19:32:40.098066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:55.806 [2024-12-16 19:32:40.098117] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 6c279ede-0f39-4354-bc49-0382ad208b2d 00:30:55.806 [2024-12-16 19:32:40.099078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:55.806 [2024-12-16 19:32:40.099112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:30:55.806 [2024-12-16 19:32:40.099121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:30:55.806 [2024-12-16 19:32:40.099129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:55.806 [2024-12-16 19:32:40.103939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:55.806 [2024-12-16 19:32:40.103970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:55.806 [2024-12-16 19:32:40.103978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.778 ms 00:30:55.806 [2024-12-16 19:32:40.103985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:55.806 [2024-12-16 19:32:40.104051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:55.806 [2024-12-16 19:32:40.104060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:55.806 [2024-12-16 19:32:40.104066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:30:55.806 [2024-12-16 19:32:40.104075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:55.807 [2024-12-16 19:32:40.104116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:55.807 [2024-12-16 19:32:40.104126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:30:55.807 [2024-12-16 19:32:40.104131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:30:55.807 [2024-12-16 19:32:40.104140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:55.807 [2024-12-16 19:32:40.104157] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:30:55.807 [2024-12-16 19:32:40.107053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:55.807 [2024-12-16 19:32:40.107080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:55.807 [2024-12-16 19:32:40.107090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.900 ms 00:30:55.807 [2024-12-16 19:32:40.107096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:55.807 [2024-12-16 19:32:40.107129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:55.807 [2024-12-16 19:32:40.107136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:30:55.807 [2024-12-16 19:32:40.107143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:30:55.807 [2024-12-16 19:32:40.107149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:55.807 [2024-12-16 19:32:40.107163] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:30:55.807 [2024-12-16 19:32:40.107280] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:30:55.807 [2024-12-16 19:32:40.107299] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:30:55.807 [2024-12-16 19:32:40.107308] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:30:55.807 [2024-12-16 19:32:40.107317] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:30:55.807 [2024-12-16 19:32:40.107324] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:30:55.807 [2024-12-16 19:32:40.107332] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:30:55.807 [2024-12-16 19:32:40.107337] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:30:55.807 [2024-12-16 19:32:40.107347] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:30:55.807 [2024-12-16 19:32:40.107352] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:30:55.807 [2024-12-16 19:32:40.107359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:55.807 [2024-12-16 19:32:40.107370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:30:55.807 [2024-12-16 19:32:40.107377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.197 ms 00:30:55.807 [2024-12-16 19:32:40.107383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:55.807 [2024-12-16 19:32:40.107450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:55.807 [2024-12-16 19:32:40.107456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:30:55.807 [2024-12-16 19:32:40.107463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:30:55.807 [2024-12-16 19:32:40.107469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:55.807 [2024-12-16 19:32:40.107545] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:30:55.807 [2024-12-16 19:32:40.107553] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:30:55.807 [2024-12-16 19:32:40.107560] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:55.807 [2024-12-16 19:32:40.107566] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:55.807 [2024-12-16 19:32:40.107573] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:30:55.807 [2024-12-16 19:32:40.107578] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:30:55.807 [2024-12-16 19:32:40.107584] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:30:55.807 [2024-12-16 19:32:40.107589] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:30:55.807 [2024-12-16 19:32:40.107596] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:30:55.807 [2024-12-16 19:32:40.107600] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:55.807 [2024-12-16 19:32:40.107608] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:30:55.807 [2024-12-16 19:32:40.107612] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:30:55.807 [2024-12-16 19:32:40.107620] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:55.807 [2024-12-16 19:32:40.107624] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:30:55.807 [2024-12-16 19:32:40.107631] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl0] offset: 113.88 MiB 00:30:55.807 [2024-12-16 19:32:40.107636] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:55.807 [2024-12-16 19:32:40.107644] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:30:55.807 [2024-12-16 19:32:40.107649] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:30:55.807 [2024-12-16 19:32:40.107655] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:55.807 [2024-12-16 19:32:40.107660] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:30:55.807 [2024-12-16 19:32:40.107667] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:30:55.807 [2024-12-16 19:32:40.107672] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:55.807 [2024-12-16 19:32:40.107678] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:30:55.807 [2024-12-16 19:32:40.107683] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:30:55.807 [2024-12-16 19:32:40.107689] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:55.807 [2024-12-16 19:32:40.107694] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:30:55.807 [2024-12-16 19:32:40.107700] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:30:55.807 [2024-12-16 19:32:40.107704] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:55.807 [2024-12-16 19:32:40.107711] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:30:55.807 [2024-12-16 19:32:40.107716] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:30:55.807 [2024-12-16 19:32:40.107722] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:55.807 [2024-12-16 19:32:40.107726] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:30:55.807 [2024-12-16 19:32:40.107735] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:30:55.807 [2024-12-16 19:32:40.107740] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:55.807 [2024-12-16 19:32:40.107746] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:30:55.807 [2024-12-16 19:32:40.107751] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:30:55.807 [2024-12-16 19:32:40.107757] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:55.807 [2024-12-16 19:32:40.107762] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:30:55.807 [2024-12-16 19:32:40.107770] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:30:55.807 [2024-12-16 19:32:40.107775] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:55.807 [2024-12-16 19:32:40.107781] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:30:55.807 [2024-12-16 19:32:40.107786] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:30:55.807 [2024-12-16 19:32:40.107793] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:55.807 [2024-12-16 19:32:40.107797] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:30:55.807 [2024-12-16 19:32:40.107804] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:30:55.807 [2024-12-16 19:32:40.107809] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:55.807 [2024-12-16 
19:32:40.107816] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:55.807 [2024-12-16 19:32:40.107822] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:30:55.807 [2024-12-16 19:32:40.107829] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:30:55.807 [2024-12-16 19:32:40.107834] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:30:55.807 [2024-12-16 19:32:40.107841] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:30:55.807 [2024-12-16 19:32:40.107845] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:30:55.807 [2024-12-16 19:32:40.107851] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:30:55.807 [2024-12-16 19:32:40.107857] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:30:55.807 [2024-12-16 19:32:40.107866] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:55.807 [2024-12-16 19:32:40.107874] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:30:55.807 [2024-12-16 19:32:40.107880] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:30:55.807 [2024-12-16 19:32:40.107885] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:30:55.807 [2024-12-16 19:32:40.107892] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:30:55.807 [2024-12-16 19:32:40.107897] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:30:55.807 [2024-12-16 19:32:40.107904] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:30:55.807 [2024-12-16 19:32:40.107909] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:30:55.807 [2024-12-16 19:32:40.107916] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:30:55.807 [2024-12-16 19:32:40.107921] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:30:55.807 [2024-12-16 19:32:40.107932] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:30:55.807 [2024-12-16 19:32:40.107938] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:30:55.807 [2024-12-16 19:32:40.107944] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:30:55.807 [2024-12-16 19:32:40.107950] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:30:55.807 [2024-12-16 19:32:40.107957] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:30:55.807 [2024-12-16 
19:32:40.107962] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:30:55.807 [2024-12-16 19:32:40.107969] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:55.807 [2024-12-16 19:32:40.107976] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:30:55.807 [2024-12-16 19:32:40.107983] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:30:55.807 [2024-12-16 19:32:40.107989] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:30:55.807 [2024-12-16 19:32:40.107995] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:30:55.807 [2024-12-16 19:32:40.108001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:55.807 [2024-12-16 19:32:40.108008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:30:55.807 [2024-12-16 19:32:40.108013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.511 ms 00:30:55.807 [2024-12-16 19:32:40.108020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:55.807 [2024-12-16 19:32:40.108060] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:30:55.807 [2024-12-16 19:32:40.108071] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:30:59.112 [2024-12-16 19:32:42.742937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:59.112 [2024-12-16 19:32:42.743016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:30:59.112 [2024-12-16 19:32:42.743033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2634.864 ms 00:30:59.112 [2024-12-16 19:32:42.743045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.112 [2024-12-16 19:32:42.770729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:59.112 [2024-12-16 19:32:42.770773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:59.112 [2024-12-16 19:32:42.770785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.431 ms 00:30:59.112 [2024-12-16 19:32:42.770795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.112 [2024-12-16 19:32:42.770926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:59.112 [2024-12-16 19:32:42.770938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:30:59.112 [2024-12-16 19:32:42.770947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:30:59.112 [2024-12-16 19:32:42.770961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.112 [2024-12-16 19:32:42.801584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:59.112 [2024-12-16 19:32:42.801621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:59.112 [2024-12-16 19:32:42.801632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.589 ms 00:30:59.112 [2024-12-16 19:32:42.801640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:30:59.112 [2024-12-16 19:32:42.801668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:59.112 [2024-12-16 19:32:42.801681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:59.112 [2024-12-16 19:32:42.801690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:30:59.112 [2024-12-16 19:32:42.801705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.112 [2024-12-16 19:32:42.802066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:59.112 [2024-12-16 19:32:42.802095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:59.112 [2024-12-16 19:32:42.802103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.319 ms 00:30:59.112 [2024-12-16 19:32:42.802112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.112 [2024-12-16 19:32:42.802227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:59.112 [2024-12-16 19:32:42.802238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:59.112 [2024-12-16 19:32:42.802248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:30:59.112 [2024-12-16 19:32:42.802259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.112 [2024-12-16 19:32:42.816491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:59.112 [2024-12-16 19:32:42.816527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:59.112 [2024-12-16 19:32:42.816537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.215 ms 00:30:59.112 [2024-12-16 19:32:42.816546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.112 [2024-12-16 19:32:42.838995] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:30:59.112 [2024-12-16 19:32:42.842436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:59.112 [2024-12-16 19:32:42.842478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:30:59.113 [2024-12-16 19:32:42.842496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.822 ms 00:30:59.113 [2024-12-16 19:32:42.842507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.113 [2024-12-16 19:32:42.922726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:59.113 [2024-12-16 19:32:42.922775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:30:59.113 [2024-12-16 19:32:42.922790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 80.172 ms 00:30:59.113 [2024-12-16 19:32:42.922798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.113 [2024-12-16 19:32:42.922981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:59.113 [2024-12-16 19:32:42.922999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:30:59.113 [2024-12-16 19:32:42.923013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.141 ms 00:30:59.113 [2024-12-16 19:32:42.923021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.113 [2024-12-16 19:32:42.947354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:59.113 [2024-12-16 19:32:42.947396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 
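A note on the "l2p maximum resident size is: 9 (of 10) MiB" line above: the figure follows from the layout dumped at startup together with the --l2p_dram_limit 10 argument passed to bdev_ftl_create. A minimal sketch of the arithmetic, using only values printed by ftl_layout.c above (the exact cache overhead that turns the 10 MiB limit into 9 MiB resident is an assumption here):

  l2p_entries=20971520     # "L2P entries: 20971520" from the layout dump
  l2p_addr_size=4          # "L2P address size: 4" (bytes per entry)
  echo $(( l2p_entries * l2p_addr_size / 1024 / 1024 ))
  # -> 80, matching "Region l2p ... blocks: 80.00 MiB"
  # --l2p_dram_limit 10 caps the resident portion of that 80 MiB map at
  # 10 MiB; FTL's own cache bookkeeping presumably consumes the missing
  # 1 MiB, hence "9 (of 10) MiB".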
00:30:59.113 [2024-12-16 19:32:42.947410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.284 ms 00:30:59.113 [2024-12-16 19:32:42.947418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.113 [2024-12-16 19:32:42.970832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:59.113 [2024-12-16 19:32:42.970873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:30:59.113 [2024-12-16 19:32:42.970886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.366 ms 00:30:59.113 [2024-12-16 19:32:42.970893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.113 [2024-12-16 19:32:42.971494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:59.113 [2024-12-16 19:32:42.971513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:30:59.113 [2024-12-16 19:32:42.971524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.560 ms 00:30:59.113 [2024-12-16 19:32:42.971534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.113 [2024-12-16 19:32:43.051833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:59.113 [2024-12-16 19:32:43.051887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:30:59.113 [2024-12-16 19:32:43.051908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 80.255 ms 00:30:59.113 [2024-12-16 19:32:43.051917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.113 [2024-12-16 19:32:43.079270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:59.113 [2024-12-16 19:32:43.079315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:30:59.113 [2024-12-16 19:32:43.079331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.234 ms 00:30:59.113 [2024-12-16 19:32:43.079339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.113 [2024-12-16 19:32:43.105278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:59.113 [2024-12-16 19:32:43.105327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:30:59.113 [2024-12-16 19:32:43.105342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.881 ms 00:30:59.113 [2024-12-16 19:32:43.105349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.113 [2024-12-16 19:32:43.131386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:59.113 [2024-12-16 19:32:43.131433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:30:59.113 [2024-12-16 19:32:43.131447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.980 ms 00:30:59.113 [2024-12-16 19:32:43.131456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.113 [2024-12-16 19:32:43.131512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:59.113 [2024-12-16 19:32:43.131523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:30:59.113 [2024-12-16 19:32:43.131539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:30:59.113 [2024-12-16 19:32:43.131547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.113 [2024-12-16 19:32:43.131645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:59.113 [2024-12-16 19:32:43.131660] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:30:59.113 [2024-12-16 19:32:43.131671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:30:59.113 [2024-12-16 19:32:43.131679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.113 [2024-12-16 19:32:43.132952] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3035.154 ms, result 0 00:30:59.113 { 00:30:59.113 "name": "ftl0", 00:30:59.113 "uuid": "6c279ede-0f39-4354-bc49-0382ad208b2d" 00:30:59.113 } 00:30:59.113 19:32:43 ftl.ftl_restore_fast -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:30:59.113 19:32:43 ftl.ftl_restore_fast -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:30:59.113 19:32:43 ftl.ftl_restore_fast -- ftl/restore.sh@63 -- # echo ']}' 00:30:59.113 19:32:43 ftl.ftl_restore_fast -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:30:59.375 [2024-12-16 19:32:43.572215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:59.375 [2024-12-16 19:32:43.572286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:30:59.375 [2024-12-16 19:32:43.572301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:30:59.375 [2024-12-16 19:32:43.572312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.375 [2024-12-16 19:32:43.572352] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:30:59.375 [2024-12-16 19:32:43.575459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:59.375 [2024-12-16 19:32:43.575501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:30:59.375 [2024-12-16 19:32:43.575515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.081 ms 00:30:59.375 [2024-12-16 19:32:43.575523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.375 [2024-12-16 19:32:43.575802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:59.375 [2024-12-16 19:32:43.575816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:30:59.375 [2024-12-16 19:32:43.575827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.241 ms 00:30:59.375 [2024-12-16 19:32:43.575835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.375 [2024-12-16 19:32:43.579080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:59.375 [2024-12-16 19:32:43.579102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:30:59.375 [2024-12-16 19:32:43.579115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.227 ms 00:30:59.375 [2024-12-16 19:32:43.579123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.375 [2024-12-16 19:32:43.585241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:59.375 [2024-12-16 19:32:43.585284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:30:59.375 [2024-12-16 19:32:43.585300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.095 ms 00:30:59.375 [2024-12-16 19:32:43.585307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.375 [2024-12-16 19:32:43.611760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:59.375 
[2024-12-16 19:32:43.611812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:30:59.375 [2024-12-16 19:32:43.611827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.365 ms 00:30:59.375 [2024-12-16 19:32:43.611834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.375 [2024-12-16 19:32:43.629870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:59.375 [2024-12-16 19:32:43.629930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:30:59.375 [2024-12-16 19:32:43.629945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.976 ms 00:30:59.375 [2024-12-16 19:32:43.629953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.375 [2024-12-16 19:32:43.630133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:59.375 [2024-12-16 19:32:43.630146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:30:59.375 [2024-12-16 19:32:43.630157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.124 ms 00:30:59.375 [2024-12-16 19:32:43.630165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.375 [2024-12-16 19:32:43.655848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:59.375 [2024-12-16 19:32:43.655895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:30:59.375 [2024-12-16 19:32:43.655908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.640 ms 00:30:59.375 [2024-12-16 19:32:43.655915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.375 [2024-12-16 19:32:43.681022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:59.375 [2024-12-16 19:32:43.681070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:30:59.376 [2024-12-16 19:32:43.681085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.053 ms 00:30:59.376 [2024-12-16 19:32:43.681092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.376 [2024-12-16 19:32:43.705894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:59.376 [2024-12-16 19:32:43.705939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:30:59.376 [2024-12-16 19:32:43.705952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.748 ms 00:30:59.376 [2024-12-16 19:32:43.705960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.638 [2024-12-16 19:32:43.730976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:59.638 [2024-12-16 19:32:43.731034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:30:59.638 [2024-12-16 19:32:43.731049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.916 ms 00:30:59.638 [2024-12-16 19:32:43.731057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.638 [2024-12-16 19:32:43.731106] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:30:59.638 [2024-12-16 19:32:43.731122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:30:59.638 [2024-12-16 19:32:43.731139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:30:59.638 [2024-12-16 19:32:43.731147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:30:59.638 [2024-12-16 19:32:43.731157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:59.638 [2024-12-16 19:32:43.731166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:59.638 [2024-12-16 19:32:43.731190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:59.638 [2024-12-16 19:32:43.731199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:59.638 [2024-12-16 19:32:43.731215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:59.638 [2024-12-16 19:32:43.731223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:59.638 [2024-12-16 19:32:43.731233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:59.638 [2024-12-16 19:32:43.731240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:59.638 [2024-12-16 19:32:43.731250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:59.638 [2024-12-16 19:32:43.731258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:59.638 [2024-12-16 19:32:43.731268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:59.638 [2024-12-16 19:32:43.731276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:59.638 [2024-12-16 19:32:43.731285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:59.638 [2024-12-16 19:32:43.731293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:59.638 [2024-12-16 19:32:43.731305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:59.638 [2024-12-16 19:32:43.731312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:30:59.638 [2024-12-16 19:32:43.731321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:30:59.638 [2024-12-16 19:32:43.731329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:30:59.638 [2024-12-16 19:32:43.731338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:30:59.638 [2024-12-16 19:32:43.731345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:30:59.638 [2024-12-16 19:32:43.731357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:30:59.638 [2024-12-16 19:32:43.731364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:30:59.638 [2024-12-16 19:32:43.731374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:30:59.638 [2024-12-16 19:32:43.731381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:30:59.638 [2024-12-16 19:32:43.731390] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:30:59.638 [2024-12-16 19:32:43.731399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:30:59.638 [2024-12-16 19:32:43.731410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:30:59.638 [2024-12-16 19:32:43.731417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:30:59.638 [2024-12-16 19:32:43.731428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:30:59.638 [2024-12-16 19:32:43.731435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:30:59.638 [2024-12-16 19:32:43.731445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:30:59.638 [2024-12-16 19:32:43.731452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:30:59.638 [2024-12-16 19:32:43.731461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:30:59.638 [2024-12-16 19:32:43.731469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:30:59.638 [2024-12-16 19:32:43.731478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:30:59.639 [2024-12-16 19:32:43.731486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:30:59.639 [2024-12-16 19:32:43.731497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:30:59.639 [2024-12-16 19:32:43.731505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:30:59.639 [2024-12-16 19:32:43.731514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:30:59.639 [2024-12-16 19:32:43.731521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:30:59.639 [2024-12-16 19:32:43.731532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:30:59.639 [2024-12-16 19:32:43.731540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:30:59.639 [2024-12-16 19:32:43.731549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:30:59.639 [2024-12-16 19:32:43.731557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:30:59.639 [2024-12-16 19:32:43.731566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:30:59.639 [2024-12-16 19:32:43.731573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:30:59.639 [2024-12-16 19:32:43.731583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:30:59.639 [2024-12-16 19:32:43.731590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:30:59.639 [2024-12-16 19:32:43.731599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:30:59.639 [2024-12-16 
19:32:43.731607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:30:59.639 [2024-12-16 19:32:43.731616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:30:59.639 [2024-12-16 19:32:43.731623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:30:59.639 [2024-12-16 19:32:43.731642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:30:59.639 [2024-12-16 19:32:43.731649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:30:59.639 [2024-12-16 19:32:43.731658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:30:59.639 [2024-12-16 19:32:43.731665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:30:59.639 [2024-12-16 19:32:43.731675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:30:59.639 [2024-12-16 19:32:43.731683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:30:59.639 [2024-12-16 19:32:43.731693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:30:59.639 [2024-12-16 19:32:43.731700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:30:59.639 [2024-12-16 19:32:43.731710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:30:59.639 [2024-12-16 19:32:43.731718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:30:59.639 [2024-12-16 19:32:43.731728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:30:59.639 [2024-12-16 19:32:43.731735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:30:59.639 [2024-12-16 19:32:43.731747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:30:59.639 [2024-12-16 19:32:43.731754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:30:59.639 [2024-12-16 19:32:43.731765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:30:59.639 [2024-12-16 19:32:43.731772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:30:59.639 [2024-12-16 19:32:43.731785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:30:59.639 [2024-12-16 19:32:43.731793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:30:59.639 [2024-12-16 19:32:43.731804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:30:59.639 [2024-12-16 19:32:43.731811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:30:59.639 [2024-12-16 19:32:43.731821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:30:59.639 [2024-12-16 19:32:43.731829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 
00:30:59.639 [2024-12-16 19:32:43.731839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:30:59.639 [2024-12-16 19:32:43.731848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:30:59.639 [2024-12-16 19:32:43.731858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:30:59.639 [2024-12-16 19:32:43.731866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:30:59.639 [2024-12-16 19:32:43.731876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:30:59.639 [2024-12-16 19:32:43.731883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:30:59.639 [2024-12-16 19:32:43.731893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:30:59.639 [2024-12-16 19:32:43.731901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:30:59.639 [2024-12-16 19:32:43.731910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:30:59.639 [2024-12-16 19:32:43.731918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:30:59.639 [2024-12-16 19:32:43.731930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:30:59.639 [2024-12-16 19:32:43.731937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:30:59.639 [2024-12-16 19:32:43.731946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:30:59.639 [2024-12-16 19:32:43.731954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:30:59.639 [2024-12-16 19:32:43.731963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:30:59.639 [2024-12-16 19:32:43.731970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:30:59.639 [2024-12-16 19:32:43.731980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:30:59.639 [2024-12-16 19:32:43.731988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:30:59.639 [2024-12-16 19:32:43.731999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:30:59.639 [2024-12-16 19:32:43.732006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:30:59.639 [2024-12-16 19:32:43.732015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:30:59.639 [2024-12-16 19:32:43.732022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:30:59.639 [2024-12-16 19:32:43.732032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:30:59.639 [2024-12-16 19:32:43.732048] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:30:59.639 [2024-12-16 19:32:43.732058] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 6c279ede-0f39-4354-bc49-0382ad208b2d 00:30:59.639 
[2024-12-16 19:32:43.732066] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:30:59.639 [2024-12-16 19:32:43.732078] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:30:59.639 [2024-12-16 19:32:43.732087] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:30:59.639 [2024-12-16 19:32:43.732097] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:30:59.639 [2024-12-16 19:32:43.732104] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:30:59.639 [2024-12-16 19:32:43.732114] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:30:59.639 [2024-12-16 19:32:43.732127] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:30:59.639 [2024-12-16 19:32:43.732136] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:30:59.639 [2024-12-16 19:32:43.732143] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:30:59.639 [2024-12-16 19:32:43.732153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:59.639 [2024-12-16 19:32:43.732161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:30:59.639 [2024-12-16 19:32:43.732182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.049 ms 00:30:59.639 [2024-12-16 19:32:43.732192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.639 [2024-12-16 19:32:43.745973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:59.639 [2024-12-16 19:32:43.746014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:30:59.639 [2024-12-16 19:32:43.746028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.733 ms 00:30:59.639 [2024-12-16 19:32:43.746037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.639 [2024-12-16 19:32:43.746466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:59.639 [2024-12-16 19:32:43.746487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:30:59.639 [2024-12-16 19:32:43.746502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.382 ms 00:30:59.639 [2024-12-16 19:32:43.746509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.639 [2024-12-16 19:32:43.793004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:59.639 [2024-12-16 19:32:43.793053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:59.639 [2024-12-16 19:32:43.793067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:59.639 [2024-12-16 19:32:43.793076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.639 [2024-12-16 19:32:43.793146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:59.639 [2024-12-16 19:32:43.793156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:59.639 [2024-12-16 19:32:43.793190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:59.639 [2024-12-16 19:32:43.793199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.639 [2024-12-16 19:32:43.793299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:59.639 [2024-12-16 19:32:43.793310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:59.639 [2024-12-16 19:32:43.793346] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:59.639 [2024-12-16 19:32:43.793355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.639 [2024-12-16 19:32:43.793378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:59.639 [2024-12-16 19:32:43.793386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:59.639 [2024-12-16 19:32:43.793396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:59.640 [2024-12-16 19:32:43.793408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.640 [2024-12-16 19:32:43.877815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:59.640 [2024-12-16 19:32:43.877874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:59.640 [2024-12-16 19:32:43.877891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:59.640 [2024-12-16 19:32:43.877899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.640 [2024-12-16 19:32:43.946827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:59.640 [2024-12-16 19:32:43.946883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:59.640 [2024-12-16 19:32:43.946898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:59.640 [2024-12-16 19:32:43.946910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.640 [2024-12-16 19:32:43.946995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:59.640 [2024-12-16 19:32:43.947006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:59.640 [2024-12-16 19:32:43.947018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:59.640 [2024-12-16 19:32:43.947026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.640 [2024-12-16 19:32:43.947100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:59.640 [2024-12-16 19:32:43.947111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:59.640 [2024-12-16 19:32:43.947121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:59.640 [2024-12-16 19:32:43.947129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.640 [2024-12-16 19:32:43.947266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:59.640 [2024-12-16 19:32:43.947278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:59.640 [2024-12-16 19:32:43.947290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:59.640 [2024-12-16 19:32:43.947298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.640 [2024-12-16 19:32:43.947335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:59.640 [2024-12-16 19:32:43.947346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:30:59.640 [2024-12-16 19:32:43.947356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:59.640 [2024-12-16 19:32:43.947364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.640 [2024-12-16 19:32:43.947410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:59.640 [2024-12-16 19:32:43.947419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open 
cache bdev 00:30:59.640 [2024-12-16 19:32:43.947430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:59.640 [2024-12-16 19:32:43.947439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.640 [2024-12-16 19:32:43.947491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:59.640 [2024-12-16 19:32:43.947502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:59.640 [2024-12-16 19:32:43.947513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:59.640 [2024-12-16 19:32:43.947524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.640 [2024-12-16 19:32:43.947676] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 375.431 ms, result 0 00:30:59.640 true 00:30:59.640 19:32:43 ftl.ftl_restore_fast -- ftl/restore.sh@66 -- # killprocess 85492 00:30:59.640 19:32:43 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # '[' -z 85492 ']' 00:30:59.640 19:32:43 ftl.ftl_restore_fast -- common/autotest_common.sh@958 -- # kill -0 85492 00:30:59.640 19:32:43 ftl.ftl_restore_fast -- common/autotest_common.sh@959 -- # uname 00:30:59.640 19:32:43 ftl.ftl_restore_fast -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:59.640 19:32:43 ftl.ftl_restore_fast -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85492 00:30:59.901 killing process with pid 85492 00:30:59.901 19:32:43 ftl.ftl_restore_fast -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:59.901 19:32:43 ftl.ftl_restore_fast -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:59.901 19:32:43 ftl.ftl_restore_fast -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85492' 00:30:59.901 19:32:43 ftl.ftl_restore_fast -- common/autotest_common.sh@973 -- # kill 85492 00:30:59.901 19:32:43 ftl.ftl_restore_fast -- common/autotest_common.sh@978 -- # wait 85492 00:31:08.043 19:32:51 ftl.ftl_restore_fast -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:31:11.348 262144+0 records in 00:31:11.348 262144+0 records out 00:31:11.348 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.4142 s, 243 MB/s 00:31:11.348 19:32:55 ftl.ftl_restore_fast -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:31:13.263 19:32:57 ftl.ftl_restore_fast -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:31:13.525 [2024-12-16 19:32:57.646264] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
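The dd / md5sum / spdk_dd triple above is the write half of the restore check: fill a 1 GiB file with random data (256K blocks of 4 KiB each), record its checksum, then stream the file into the FTL bdev through spdk_dd using the bdev configuration saved to ftl.json before the unload. A condensed sketch of that sequence as restore.sh drives it, with the same paths as in the log (the later read-back and md5 comparison against this checksum is assumed rather than shown here):

  # 262144 records x 4096 bytes = 1073741824 bytes (1 GiB), as logged above
  dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K
  md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile   # reference checksum for the verify step
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
      --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile \
      --ob=ftl0 \
      --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json   # write into the FTL bdev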
00:31:13.525 [2024-12-16 19:32:57.646457] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85714 ] 00:31:13.525 [2024-12-16 19:32:57.798836] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:13.786 [2024-12-16 19:32:57.897485] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:31:14.083 [2024-12-16 19:32:58.173297] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:14.083 [2024-12-16 19:32:58.173390] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:14.083 [2024-12-16 19:32:58.334848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:14.083 [2024-12-16 19:32:58.334917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:31:14.083 [2024-12-16 19:32:58.334934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:31:14.083 [2024-12-16 19:32:58.334944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:14.083 [2024-12-16 19:32:58.335002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:14.083 [2024-12-16 19:32:58.335015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:14.083 [2024-12-16 19:32:58.335024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:31:14.083 [2024-12-16 19:32:58.335033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:14.083 [2024-12-16 19:32:58.335054] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:31:14.083 [2024-12-16 19:32:58.335853] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:31:14.083 [2024-12-16 19:32:58.335881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:14.083 [2024-12-16 19:32:58.335890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:14.083 [2024-12-16 19:32:58.335900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.833 ms 00:31:14.083 [2024-12-16 19:32:58.335909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:14.083 [2024-12-16 19:32:58.337702] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:31:14.083 [2024-12-16 19:32:58.351977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:14.083 [2024-12-16 19:32:58.352031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:31:14.083 [2024-12-16 19:32:58.352047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.278 ms 00:31:14.084 [2024-12-16 19:32:58.352056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:14.084 [2024-12-16 19:32:58.352149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:14.084 [2024-12-16 19:32:58.352160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:31:14.084 [2024-12-16 19:32:58.352194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:31:14.084 [2024-12-16 19:32:58.352203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:14.084 [2024-12-16 19:32:58.360656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:31:14.084 [2024-12-16 19:32:58.360704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:14.084 [2024-12-16 19:32:58.360716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.368 ms 00:31:14.084 [2024-12-16 19:32:58.360731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:14.084 [2024-12-16 19:32:58.360812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:14.084 [2024-12-16 19:32:58.360821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:14.084 [2024-12-16 19:32:58.360830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:31:14.084 [2024-12-16 19:32:58.360838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:14.084 [2024-12-16 19:32:58.360885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:14.084 [2024-12-16 19:32:58.360895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:31:14.084 [2024-12-16 19:32:58.360904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:31:14.084 [2024-12-16 19:32:58.360912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:14.084 [2024-12-16 19:32:58.360938] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:31:14.084 [2024-12-16 19:32:58.365152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:14.084 [2024-12-16 19:32:58.365204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:14.084 [2024-12-16 19:32:58.365219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.219 ms 00:31:14.084 [2024-12-16 19:32:58.365227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:14.084 [2024-12-16 19:32:58.365270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:14.084 [2024-12-16 19:32:58.365279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:31:14.084 [2024-12-16 19:32:58.365288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:31:14.084 [2024-12-16 19:32:58.365296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:14.084 [2024-12-16 19:32:58.365352] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:31:14.084 [2024-12-16 19:32:58.365377] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:31:14.084 [2024-12-16 19:32:58.365416] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:31:14.084 [2024-12-16 19:32:58.365434] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:31:14.084 [2024-12-16 19:32:58.365541] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:31:14.084 [2024-12-16 19:32:58.365552] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:31:14.084 [2024-12-16 19:32:58.365562] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:31:14.084 [2024-12-16 19:32:58.365573] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:31:14.084 [2024-12-16 19:32:58.365584] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:31:14.084 [2024-12-16 19:32:58.365593] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:31:14.084 [2024-12-16 19:32:58.365601] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:31:14.084 [2024-12-16 19:32:58.365609] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:31:14.084 [2024-12-16 19:32:58.365620] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:31:14.084 [2024-12-16 19:32:58.365628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:14.084 [2024-12-16 19:32:58.365637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:31:14.084 [2024-12-16 19:32:58.365645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.279 ms 00:31:14.084 [2024-12-16 19:32:58.365652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:14.084 [2024-12-16 19:32:58.365738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:14.084 [2024-12-16 19:32:58.365747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:31:14.084 [2024-12-16 19:32:58.365756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:31:14.084 [2024-12-16 19:32:58.365763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:14.084 [2024-12-16 19:32:58.365863] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:31:14.084 [2024-12-16 19:32:58.365875] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:31:14.084 [2024-12-16 19:32:58.365883] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:14.084 [2024-12-16 19:32:58.365892] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:14.084 [2024-12-16 19:32:58.365902] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:31:14.084 [2024-12-16 19:32:58.365910] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:31:14.084 [2024-12-16 19:32:58.365918] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:31:14.084 [2024-12-16 19:32:58.365925] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:31:14.084 [2024-12-16 19:32:58.365933] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:31:14.084 [2024-12-16 19:32:58.365940] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:14.084 [2024-12-16 19:32:58.365950] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:31:14.084 [2024-12-16 19:32:58.365957] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:31:14.084 [2024-12-16 19:32:58.365964] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:14.084 [2024-12-16 19:32:58.365983] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:31:14.084 [2024-12-16 19:32:58.365990] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:31:14.084 [2024-12-16 19:32:58.365997] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:14.084 [2024-12-16 19:32:58.366004] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:31:14.084 [2024-12-16 19:32:58.366012] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:31:14.084 [2024-12-16 19:32:58.366019] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:14.084 [2024-12-16 19:32:58.366026] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:31:14.084 [2024-12-16 19:32:58.366033] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:31:14.084 [2024-12-16 19:32:58.366041] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:14.084 [2024-12-16 19:32:58.366048] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:31:14.084 [2024-12-16 19:32:58.366055] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:31:14.084 [2024-12-16 19:32:58.366062] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:14.084 [2024-12-16 19:32:58.366069] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:31:14.084 [2024-12-16 19:32:58.366076] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:31:14.084 [2024-12-16 19:32:58.366083] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:14.084 [2024-12-16 19:32:58.366090] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:31:14.084 [2024-12-16 19:32:58.366099] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:31:14.084 [2024-12-16 19:32:58.366106] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:14.084 [2024-12-16 19:32:58.366113] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:31:14.084 [2024-12-16 19:32:58.366120] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:31:14.084 [2024-12-16 19:32:58.366128] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:14.084 [2024-12-16 19:32:58.366135] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:31:14.084 [2024-12-16 19:32:58.366143] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:31:14.084 [2024-12-16 19:32:58.366150] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:14.084 [2024-12-16 19:32:58.366158] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:31:14.084 [2024-12-16 19:32:58.366165] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:31:14.084 [2024-12-16 19:32:58.366201] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:14.084 [2024-12-16 19:32:58.366209] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:31:14.084 [2024-12-16 19:32:58.366217] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:31:14.084 [2024-12-16 19:32:58.366224] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:14.084 [2024-12-16 19:32:58.366232] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:31:14.084 [2024-12-16 19:32:58.366242] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:31:14.084 [2024-12-16 19:32:58.366250] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:14.084 [2024-12-16 19:32:58.366259] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:14.084 [2024-12-16 19:32:58.366268] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:31:14.084 [2024-12-16 19:32:58.366276] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:31:14.084 [2024-12-16 19:32:58.366283] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:31:14.084 
[2024-12-16 19:32:58.366291] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:31:14.084 [2024-12-16 19:32:58.366299] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:31:14.084 [2024-12-16 19:32:58.366306] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:31:14.084 [2024-12-16 19:32:58.366316] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:31:14.084 [2024-12-16 19:32:58.366325] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:14.084 [2024-12-16 19:32:58.366336] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:31:14.085 [2024-12-16 19:32:58.366345] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:31:14.085 [2024-12-16 19:32:58.366352] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:31:14.085 [2024-12-16 19:32:58.366359] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:31:14.085 [2024-12-16 19:32:58.366367] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:31:14.085 [2024-12-16 19:32:58.366374] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:31:14.085 [2024-12-16 19:32:58.366382] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:31:14.085 [2024-12-16 19:32:58.366390] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:31:14.085 [2024-12-16 19:32:58.366397] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:31:14.085 [2024-12-16 19:32:58.366405] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:31:14.085 [2024-12-16 19:32:58.366412] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:31:14.085 [2024-12-16 19:32:58.366419] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:31:14.085 [2024-12-16 19:32:58.366426] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:31:14.085 [2024-12-16 19:32:58.366433] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:31:14.085 [2024-12-16 19:32:58.366440] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:31:14.085 [2024-12-16 19:32:58.366448] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:14.085 [2024-12-16 19:32:58.366458] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:31:14.085 [2024-12-16 19:32:58.366465] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:31:14.085 [2024-12-16 19:32:58.366472] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:31:14.085 [2024-12-16 19:32:58.366481] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:31:14.085 [2024-12-16 19:32:58.366489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:14.085 [2024-12-16 19:32:58.366496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:31:14.085 [2024-12-16 19:32:58.366506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.694 ms 00:31:14.085 [2024-12-16 19:32:58.366514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:14.085 [2024-12-16 19:32:58.399017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:14.085 [2024-12-16 19:32:58.399298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:14.085 [2024-12-16 19:32:58.399322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.436 ms 00:31:14.085 [2024-12-16 19:32:58.399339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:14.085 [2024-12-16 19:32:58.399441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:14.085 [2024-12-16 19:32:58.399450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:31:14.085 [2024-12-16 19:32:58.399461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:31:14.085 [2024-12-16 19:32:58.399469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:14.371 [2024-12-16 19:32:58.451024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:14.371 [2024-12-16 19:32:58.451262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:14.371 [2024-12-16 19:32:58.451287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 51.489 ms 00:31:14.371 [2024-12-16 19:32:58.451296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:14.371 [2024-12-16 19:32:58.451350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:14.371 [2024-12-16 19:32:58.451360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:14.371 [2024-12-16 19:32:58.451384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:31:14.371 [2024-12-16 19:32:58.451393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:14.371 [2024-12-16 19:32:58.452007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:14.371 [2024-12-16 19:32:58.452032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:14.371 [2024-12-16 19:32:58.452043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.533 ms 00:31:14.371 [2024-12-16 19:32:58.452051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:14.372 [2024-12-16 19:32:58.452238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:14.372 [2024-12-16 19:32:58.452251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:14.372 [2024-12-16 19:32:58.452264] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.155 ms 00:31:14.372 [2024-12-16 19:32:58.452272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:14.372 [2024-12-16 19:32:58.468306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:14.372 [2024-12-16 19:32:58.468357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:14.372 [2024-12-16 19:32:58.468370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.013 ms 00:31:14.372 [2024-12-16 19:32:58.468379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:14.372 [2024-12-16 19:32:58.482794] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:31:14.372 [2024-12-16 19:32:58.482995] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:31:14.372 [2024-12-16 19:32:58.483017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:14.372 [2024-12-16 19:32:58.483027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:31:14.372 [2024-12-16 19:32:58.483038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.522 ms 00:31:14.372 [2024-12-16 19:32:58.483045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:14.372 [2024-12-16 19:32:58.510021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:14.372 [2024-12-16 19:32:58.510086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:31:14.372 [2024-12-16 19:32:58.510101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.643 ms 00:31:14.372 [2024-12-16 19:32:58.510110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:14.372 [2024-12-16 19:32:58.523626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:14.372 [2024-12-16 19:32:58.523680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:31:14.372 [2024-12-16 19:32:58.523694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.417 ms 00:31:14.372 [2024-12-16 19:32:58.523701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:14.372 [2024-12-16 19:32:58.537023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:14.372 [2024-12-16 19:32:58.537075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:31:14.372 [2024-12-16 19:32:58.537087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.269 ms 00:31:14.372 [2024-12-16 19:32:58.537094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:14.372 [2024-12-16 19:32:58.537781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:14.372 [2024-12-16 19:32:58.537817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:31:14.372 [2024-12-16 19:32:58.537828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.544 ms 00:31:14.372 [2024-12-16 19:32:58.537840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:14.372 [2024-12-16 19:32:58.605336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:14.372 [2024-12-16 19:32:58.605405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:31:14.372 [2024-12-16 19:32:58.605423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 67.474 ms 00:31:14.372 [2024-12-16 19:32:58.605439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:14.372 [2024-12-16 19:32:58.616603] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:31:14.372 [2024-12-16 19:32:58.619960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:14.372 [2024-12-16 19:32:58.620008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:31:14.372 [2024-12-16 19:32:58.620022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.458 ms 00:31:14.372 [2024-12-16 19:32:58.620031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:14.372 [2024-12-16 19:32:58.620125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:14.372 [2024-12-16 19:32:58.620137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:31:14.372 [2024-12-16 19:32:58.620147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:31:14.372 [2024-12-16 19:32:58.620155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:14.372 [2024-12-16 19:32:58.620256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:14.372 [2024-12-16 19:32:58.620268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:31:14.372 [2024-12-16 19:32:58.620278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:31:14.372 [2024-12-16 19:32:58.620286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:14.372 [2024-12-16 19:32:58.620317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:14.372 [2024-12-16 19:32:58.620328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:31:14.372 [2024-12-16 19:32:58.620336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:31:14.372 [2024-12-16 19:32:58.620345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:14.372 [2024-12-16 19:32:58.620379] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:31:14.372 [2024-12-16 19:32:58.620393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:14.372 [2024-12-16 19:32:58.620401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:31:14.372 [2024-12-16 19:32:58.620410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:31:14.372 [2024-12-16 19:32:58.620418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:14.372 [2024-12-16 19:32:58.647105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:14.372 [2024-12-16 19:32:58.647328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:31:14.372 [2024-12-16 19:32:58.647354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.663 ms 00:31:14.372 [2024-12-16 19:32:58.647371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:14.372 [2024-12-16 19:32:58.647455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:14.372 [2024-12-16 19:32:58.647466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:31:14.372 [2024-12-16 19:32:58.647475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:31:14.372 [2024-12-16 19:32:58.647484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:31:14.372 [2024-12-16 19:32:58.649068] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 313.730 ms, result 0 00:31:15.315  [2024-12-16T19:33:01.055Z] Copying: 15/1024 [MB] (15 MBps) [2024-12-16T19:33:01.997Z] Copying: 27/1024 [MB] (11 MBps) [2024-12-16T19:33:02.941Z] Copying: 43/1024 [MB] (16 MBps) [2024-12-16T19:33:03.886Z] Copying: 57/1024 [MB] (14 MBps) [2024-12-16T19:33:04.829Z] Copying: 68/1024 [MB] (11 MBps) [2024-12-16T19:33:05.770Z] Copying: 88/1024 [MB] (19 MBps) [2024-12-16T19:33:06.714Z] Copying: 100/1024 [MB] (12 MBps) [2024-12-16T19:33:08.100Z] Copying: 112368/1048576 [kB] (9160 kBps) [2024-12-16T19:33:08.673Z] Copying: 128/1024 [MB] (19 MBps) [2024-12-16T19:33:10.060Z] Copying: 144/1024 [MB] (16 MBps) [2024-12-16T19:33:11.001Z] Copying: 161/1024 [MB] (16 MBps) [2024-12-16T19:33:11.944Z] Copying: 179/1024 [MB] (17 MBps) [2024-12-16T19:33:12.887Z] Copying: 199/1024 [MB] (20 MBps) [2024-12-16T19:33:13.832Z] Copying: 225/1024 [MB] (26 MBps) [2024-12-16T19:33:14.774Z] Copying: 241/1024 [MB] (16 MBps) [2024-12-16T19:33:15.714Z] Copying: 253/1024 [MB] (11 MBps) [2024-12-16T19:33:17.101Z] Copying: 268/1024 [MB] (14 MBps) [2024-12-16T19:33:17.673Z] Copying: 282/1024 [MB] (14 MBps) [2024-12-16T19:33:19.061Z] Copying: 299/1024 [MB] (16 MBps) [2024-12-16T19:33:20.006Z] Copying: 314/1024 [MB] (15 MBps) [2024-12-16T19:33:20.949Z] Copying: 330/1024 [MB] (15 MBps) [2024-12-16T19:33:21.951Z] Copying: 347/1024 [MB] (17 MBps) [2024-12-16T19:33:22.907Z] Copying: 361/1024 [MB] (14 MBps) [2024-12-16T19:33:23.851Z] Copying: 381/1024 [MB] (20 MBps) [2024-12-16T19:33:24.795Z] Copying: 395/1024 [MB] (14 MBps) [2024-12-16T19:33:25.738Z] Copying: 407/1024 [MB] (12 MBps) [2024-12-16T19:33:26.681Z] Copying: 426/1024 [MB] (18 MBps) [2024-12-16T19:33:28.065Z] Copying: 438/1024 [MB] (11 MBps) [2024-12-16T19:33:29.009Z] Copying: 449/1024 [MB] (10 MBps) [2024-12-16T19:33:29.953Z] Copying: 459/1024 [MB] (10 MBps) [2024-12-16T19:33:30.897Z] Copying: 469/1024 [MB] (10 MBps) [2024-12-16T19:33:31.840Z] Copying: 480/1024 [MB] (10 MBps) [2024-12-16T19:33:32.783Z] Copying: 492/1024 [MB] (11 MBps) [2024-12-16T19:33:33.727Z] Copying: 514212/1048576 [kB] (10212 kBps) [2024-12-16T19:33:34.670Z] Copying: 533/1024 [MB] (30 MBps) [2024-12-16T19:33:36.064Z] Copying: 545/1024 [MB] (12 MBps) [2024-12-16T19:33:37.007Z] Copying: 555/1024 [MB] (10 MBps) [2024-12-16T19:33:37.952Z] Copying: 580/1024 [MB] (24 MBps) [2024-12-16T19:33:38.897Z] Copying: 597/1024 [MB] (16 MBps) [2024-12-16T19:33:39.840Z] Copying: 607/1024 [MB] (10 MBps) [2024-12-16T19:33:40.785Z] Copying: 627/1024 [MB] (19 MBps) [2024-12-16T19:33:41.731Z] Copying: 643/1024 [MB] (16 MBps) [2024-12-16T19:33:42.674Z] Copying: 653/1024 [MB] (10 MBps) [2024-12-16T19:33:44.062Z] Copying: 682/1024 [MB] (28 MBps) [2024-12-16T19:33:45.080Z] Copying: 695/1024 [MB] (12 MBps) [2024-12-16T19:33:46.021Z] Copying: 715/1024 [MB] (20 MBps) [2024-12-16T19:33:46.965Z] Copying: 735/1024 [MB] (19 MBps) [2024-12-16T19:33:47.906Z] Copying: 753/1024 [MB] (18 MBps) [2024-12-16T19:33:48.847Z] Copying: 777/1024 [MB] (23 MBps) [2024-12-16T19:33:49.788Z] Copying: 829/1024 [MB] (52 MBps) [2024-12-16T19:33:50.732Z] Copying: 880/1024 [MB] (51 MBps) [2024-12-16T19:33:51.672Z] Copying: 908/1024 [MB] (27 MBps) [2024-12-16T19:33:53.057Z] Copying: 935/1024 [MB] (26 MBps) [2024-12-16T19:33:54.000Z] Copying: 958/1024 [MB] (22 MBps) [2024-12-16T19:33:54.943Z] Copying: 974/1024 [MB] (16 MBps) [2024-12-16T19:33:55.515Z] Copying: 990/1024 [MB] 
(16 MBps) [2024-12-16T19:33:55.515Z] Copying: 1024/1024 [MB] (average 18 MBps)[2024-12-16 19:33:55.317510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:11.161 [2024-12-16 19:33:55.317546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:32:11.161 [2024-12-16 19:33:55.317557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:32:11.161 [2024-12-16 19:33:55.317563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:11.161 [2024-12-16 19:33:55.317580] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:32:11.161 [2024-12-16 19:33:55.319736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:11.161 [2024-12-16 19:33:55.319760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:32:11.161 [2024-12-16 19:33:55.319769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.146 ms 00:32:11.161 [2024-12-16 19:33:55.319779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:11.161 [2024-12-16 19:33:55.321189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:11.161 [2024-12-16 19:33:55.321213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:32:11.161 [2024-12-16 19:33:55.321220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.392 ms 00:32:11.161 [2024-12-16 19:33:55.321227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:11.161 [2024-12-16 19:33:55.321246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:11.161 [2024-12-16 19:33:55.321253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:32:11.161 [2024-12-16 19:33:55.321259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:32:11.161 [2024-12-16 19:33:55.321265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:11.161 [2024-12-16 19:33:55.321303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:11.161 [2024-12-16 19:33:55.321310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:32:11.161 [2024-12-16 19:33:55.321316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:32:11.161 [2024-12-16 19:33:55.321322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:11.161 [2024-12-16 19:33:55.321332] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:32:11.161 [2024-12-16 19:33:55.321341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:32:11.161 [2024-12-16 19:33:55.321348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:32:11.161 [2024-12-16 19:33:55.321354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:32:11.161 [2024-12-16 19:33:55.321360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:32:11.161 [2024-12-16 19:33:55.321366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:32:11.161 [2024-12-16 19:33:55.321372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:32:11.161 [2024-12-16 19:33:55.321378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 7: 0 / 261120 wr_cnt: 0 state: free 00:32:11.161 [2024-12-16 19:33:55.321384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:32:11.161 [2024-12-16 19:33:55.321389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:32:11.161 [2024-12-16 19:33:55.321395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:32:11.161 [2024-12-16 19:33:55.321401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:32:11.161 [2024-12-16 19:33:55.321406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:32:11.161 [2024-12-16 19:33:55.321412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:32:11.161 [2024-12-16 19:33:55.321418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:32:11.161 [2024-12-16 19:33:55.321424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:32:11.161 [2024-12-16 19:33:55.321429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:32:11.161 [2024-12-16 19:33:55.321435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:32:11.161 [2024-12-16 19:33:55.321441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:32:11.161 [2024-12-16 19:33:55.321446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:32:11.161 [2024-12-16 19:33:55.321452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:32:11.161 [2024-12-16 19:33:55.321458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:32:11.161 [2024-12-16 19:33:55.321464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:32:11.161 [2024-12-16 19:33:55.321470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:32:11.161 [2024-12-16 19:33:55.321475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:32:11.161 [2024-12-16 19:33:55.321481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:32:11.161 [2024-12-16 19:33:55.321486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:32:11.161 [2024-12-16 19:33:55.321492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:32:11.161 [2024-12-16 19:33:55.321498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:32:11.161 [2024-12-16 19:33:55.321504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:32:11.161 [2024-12-16 19:33:55.321516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:32:11.161 [2024-12-16 19:33:55.321521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:32:11.161 [2024-12-16 19:33:55.321527] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:32:11.161 [2024-12-16 19:33:55.321533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:32:11.161 [2024-12-16 19:33:55.321538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:32:11.161 [2024-12-16 19:33:55.321544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:32:11.161 [2024-12-16 19:33:55.321550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:32:11.161 [2024-12-16 19:33:55.321557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:32:11.161 [2024-12-16 19:33:55.321563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:32:11.161 [2024-12-16 19:33:55.321568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:32:11.161 [2024-12-16 19:33:55.321574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:32:11.161 [2024-12-16 19:33:55.321580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:32:11.161 [2024-12-16 19:33:55.321586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:32:11.161 [2024-12-16 19:33:55.321592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:32:11.161 [2024-12-16 19:33:55.321597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:32:11.161 [2024-12-16 19:33:55.321603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:32:11.161 [2024-12-16 19:33:55.321609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:32:11.161 [2024-12-16 19:33:55.321614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:32:11.161 [2024-12-16 19:33:55.321620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:32:11.161 [2024-12-16 19:33:55.321626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:32:11.161 [2024-12-16 19:33:55.321632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:32:11.161 [2024-12-16 19:33:55.321638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:32:11.161 [2024-12-16 19:33:55.321643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:32:11.161 [2024-12-16 19:33:55.321649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:32:11.161 [2024-12-16 19:33:55.321655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:32:11.161 [2024-12-16 19:33:55.321661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:32:11.161 [2024-12-16 19:33:55.321666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:32:11.161 [2024-12-16 19:33:55.321672] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:32:11.161 [2024-12-16 19:33:55.321677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:32:11.161 [2024-12-16 19:33:55.321684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:32:11.161 [2024-12-16 19:33:55.321690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:32:11.161 [2024-12-16 19:33:55.321696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:32:11.161 [2024-12-16 19:33:55.321702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:32:11.161 [2024-12-16 19:33:55.321708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:32:11.161 [2024-12-16 19:33:55.321713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:32:11.161 [2024-12-16 19:33:55.321719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:32:11.161 [2024-12-16 19:33:55.321724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:32:11.161 [2024-12-16 19:33:55.321730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:32:11.161 [2024-12-16 19:33:55.321736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:32:11.161 [2024-12-16 19:33:55.321742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:32:11.161 [2024-12-16 19:33:55.321748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:32:11.161 [2024-12-16 19:33:55.321753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:32:11.161 [2024-12-16 19:33:55.321759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:32:11.162 [2024-12-16 19:33:55.321764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:32:11.162 [2024-12-16 19:33:55.321770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:32:11.162 [2024-12-16 19:33:55.321776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:32:11.162 [2024-12-16 19:33:55.321782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:32:11.162 [2024-12-16 19:33:55.321788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:32:11.162 [2024-12-16 19:33:55.321793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:32:11.162 [2024-12-16 19:33:55.321799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:32:11.162 [2024-12-16 19:33:55.321805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:32:11.162 [2024-12-16 19:33:55.321810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:32:11.162 [2024-12-16 
19:33:55.321816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:32:11.162 [2024-12-16 19:33:55.321822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:32:11.162 [2024-12-16 19:33:55.321827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:32:11.162 [2024-12-16 19:33:55.321833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:32:11.162 [2024-12-16 19:33:55.321838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:32:11.162 [2024-12-16 19:33:55.321844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:32:11.162 [2024-12-16 19:33:55.321850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:32:11.162 [2024-12-16 19:33:55.321855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:32:11.162 [2024-12-16 19:33:55.321861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:32:11.162 [2024-12-16 19:33:55.321867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:32:11.162 [2024-12-16 19:33:55.321873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:32:11.162 [2024-12-16 19:33:55.321878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:32:11.162 [2024-12-16 19:33:55.321884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:32:11.162 [2024-12-16 19:33:55.321890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:32:11.162 [2024-12-16 19:33:55.321896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:32:11.162 [2024-12-16 19:33:55.321901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:32:11.162 [2024-12-16 19:33:55.321907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:32:11.162 [2024-12-16 19:33:55.321912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:32:11.162 [2024-12-16 19:33:55.321918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:32:11.162 [2024-12-16 19:33:55.321930] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:32:11.162 [2024-12-16 19:33:55.321936] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 6c279ede-0f39-4354-bc49-0382ad208b2d 00:32:11.162 [2024-12-16 19:33:55.321942] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:32:11.162 [2024-12-16 19:33:55.321948] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:32:11.162 [2024-12-16 19:33:55.321954] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:32:11.162 [2024-12-16 19:33:55.321962] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:32:11.162 [2024-12-16 19:33:55.321967] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:32:11.162 [2024-12-16 
19:33:55.321973] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:32:11.162 [2024-12-16 19:33:55.321979] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:32:11.162 [2024-12-16 19:33:55.321985] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:32:11.162 [2024-12-16 19:33:55.321990] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:32:11.162 [2024-12-16 19:33:55.321995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:11.162 [2024-12-16 19:33:55.322001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:32:11.162 [2024-12-16 19:33:55.322007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.663 ms 00:32:11.162 [2024-12-16 19:33:55.322013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:11.162 [2024-12-16 19:33:55.331840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:11.162 [2024-12-16 19:33:55.331869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:32:11.162 [2024-12-16 19:33:55.331878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.817 ms 00:32:11.162 [2024-12-16 19:33:55.331884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:11.162 [2024-12-16 19:33:55.332152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:11.162 [2024-12-16 19:33:55.332162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:32:11.162 [2024-12-16 19:33:55.332168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.254 ms 00:32:11.162 [2024-12-16 19:33:55.332182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:11.162 [2024-12-16 19:33:55.357775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:11.162 [2024-12-16 19:33:55.357882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:11.162 [2024-12-16 19:33:55.357895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:11.162 [2024-12-16 19:33:55.357901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:11.162 [2024-12-16 19:33:55.357944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:11.162 [2024-12-16 19:33:55.357951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:11.162 [2024-12-16 19:33:55.357957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:11.162 [2024-12-16 19:33:55.357962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:11.162 [2024-12-16 19:33:55.358011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:11.162 [2024-12-16 19:33:55.358022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:11.162 [2024-12-16 19:33:55.358028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:11.162 [2024-12-16 19:33:55.358035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:11.162 [2024-12-16 19:33:55.358045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:11.162 [2024-12-16 19:33:55.358051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:11.162 [2024-12-16 19:33:55.358059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:11.162 [2024-12-16 19:33:55.358065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:32:11.162 [2024-12-16 19:33:55.416504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:11.162 [2024-12-16 19:33:55.416636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:11.162 [2024-12-16 19:33:55.416649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:11.162 [2024-12-16 19:33:55.416656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:11.162 [2024-12-16 19:33:55.464314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:11.162 [2024-12-16 19:33:55.464343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:11.162 [2024-12-16 19:33:55.464351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:11.162 [2024-12-16 19:33:55.464357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:11.162 [2024-12-16 19:33:55.464394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:11.162 [2024-12-16 19:33:55.464402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:11.162 [2024-12-16 19:33:55.464411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:11.162 [2024-12-16 19:33:55.464417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:11.162 [2024-12-16 19:33:55.464457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:11.162 [2024-12-16 19:33:55.464464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:11.162 [2024-12-16 19:33:55.464470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:11.162 [2024-12-16 19:33:55.464476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:11.162 [2024-12-16 19:33:55.464530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:11.162 [2024-12-16 19:33:55.464537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:11.162 [2024-12-16 19:33:55.464548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:11.162 [2024-12-16 19:33:55.464556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:11.162 [2024-12-16 19:33:55.464574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:11.162 [2024-12-16 19:33:55.464581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:32:11.162 [2024-12-16 19:33:55.464587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:11.162 [2024-12-16 19:33:55.464593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:11.162 [2024-12-16 19:33:55.464619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:11.162 [2024-12-16 19:33:55.464626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:11.162 [2024-12-16 19:33:55.464631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:11.162 [2024-12-16 19:33:55.464639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:11.162 [2024-12-16 19:33:55.464671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:11.162 [2024-12-16 19:33:55.464678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:11.162 [2024-12-16 19:33:55.464684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:11.162 
[2024-12-16 19:33:55.464690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:11.162 [2024-12-16 19:33:55.464778] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 147.247 ms, result 0 00:32:11.733 00:32:11.733 00:32:11.733 19:33:56 ftl.ftl_restore_fast -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:32:11.993 [2024-12-16 19:33:56.107520] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:32:11.993 [2024-12-16 19:33:56.107641] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86317 ] 00:32:11.993 [2024-12-16 19:33:56.264436] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:12.253 [2024-12-16 19:33:56.346755] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:32:12.254 [2024-12-16 19:33:56.555798] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:12.254 [2024-12-16 19:33:56.555845] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:12.516 [2024-12-16 19:33:56.702903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:12.516 [2024-12-16 19:33:56.703045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:32:12.516 [2024-12-16 19:33:56.703061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:32:12.516 [2024-12-16 19:33:56.703068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:12.516 [2024-12-16 19:33:56.703108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:12.516 [2024-12-16 19:33:56.703117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:12.516 [2024-12-16 19:33:56.703123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:32:12.516 [2024-12-16 19:33:56.703129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:12.516 [2024-12-16 19:33:56.703144] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:32:12.516 [2024-12-16 19:33:56.703691] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:32:12.516 [2024-12-16 19:33:56.703708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:12.516 [2024-12-16 19:33:56.703715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:12.516 [2024-12-16 19:33:56.703721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.568 ms 00:32:12.516 [2024-12-16 19:33:56.703727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:12.516 [2024-12-16 19:33:56.703929] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:32:12.516 [2024-12-16 19:33:56.703946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:12.516 [2024-12-16 19:33:56.703954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:32:12.516 [2024-12-16 19:33:56.703962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.018 ms 00:32:12.516 [2024-12-16 19:33:56.703967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:12.516 [2024-12-16 19:33:56.703998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:12.516 [2024-12-16 19:33:56.704004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:32:12.516 [2024-12-16 19:33:56.704010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:32:12.516 [2024-12-16 19:33:56.704016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:12.516 [2024-12-16 19:33:56.704255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:12.516 [2024-12-16 19:33:56.704264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:12.516 [2024-12-16 19:33:56.704271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.216 ms 00:32:12.516 [2024-12-16 19:33:56.704277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:12.516 [2024-12-16 19:33:56.704324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:12.516 [2024-12-16 19:33:56.704330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:12.516 [2024-12-16 19:33:56.704336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:32:12.516 [2024-12-16 19:33:56.704341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:12.516 [2024-12-16 19:33:56.704356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:12.516 [2024-12-16 19:33:56.704364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:32:12.516 [2024-12-16 19:33:56.704371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:32:12.516 [2024-12-16 19:33:56.704376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:12.516 [2024-12-16 19:33:56.704389] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:32:12.516 [2024-12-16 19:33:56.707191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:12.516 [2024-12-16 19:33:56.707215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:12.516 [2024-12-16 19:33:56.707223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.805 ms 00:32:12.516 [2024-12-16 19:33:56.707229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:12.516 [2024-12-16 19:33:56.707258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:12.516 [2024-12-16 19:33:56.707265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:32:12.516 [2024-12-16 19:33:56.707270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:32:12.516 [2024-12-16 19:33:56.707276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:12.516 [2024-12-16 19:33:56.707310] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:32:12.516 [2024-12-16 19:33:56.707327] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:32:12.516 [2024-12-16 19:33:56.707352] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:32:12.516 [2024-12-16 19:33:56.707364] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 
00:32:12.516 [2024-12-16 19:33:56.707442] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:32:12.516 [2024-12-16 19:33:56.707450] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:32:12.516 [2024-12-16 19:33:56.707457] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:32:12.516 [2024-12-16 19:33:56.707465] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:32:12.516 [2024-12-16 19:33:56.707474] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:32:12.516 [2024-12-16 19:33:56.707480] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:32:12.516 [2024-12-16 19:33:56.707485] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:32:12.516 [2024-12-16 19:33:56.707491] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:32:12.516 [2024-12-16 19:33:56.707496] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:32:12.516 [2024-12-16 19:33:56.707501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:12.516 [2024-12-16 19:33:56.707506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:32:12.516 [2024-12-16 19:33:56.707512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.193 ms 00:32:12.516 [2024-12-16 19:33:56.707517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:12.516 [2024-12-16 19:33:56.707580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:12.516 [2024-12-16 19:33:56.707586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:32:12.516 [2024-12-16 19:33:56.707593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:32:12.516 [2024-12-16 19:33:56.707598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:12.516 [2024-12-16 19:33:56.707669] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:32:12.516 [2024-12-16 19:33:56.707676] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:32:12.517 [2024-12-16 19:33:56.707681] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:12.517 [2024-12-16 19:33:56.707687] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:12.517 [2024-12-16 19:33:56.707693] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:32:12.517 [2024-12-16 19:33:56.707699] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:32:12.517 [2024-12-16 19:33:56.707704] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:32:12.517 [2024-12-16 19:33:56.707710] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:32:12.517 [2024-12-16 19:33:56.707716] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:32:12.517 [2024-12-16 19:33:56.707721] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:12.517 [2024-12-16 19:33:56.707726] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:32:12.517 [2024-12-16 19:33:56.707731] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:32:12.517 [2024-12-16 19:33:56.707736] ftl_layout.c: 133:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 0.50 MiB 00:32:12.517 [2024-12-16 19:33:56.707742] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:32:12.517 [2024-12-16 19:33:56.707747] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:32:12.517 [2024-12-16 19:33:56.707757] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:12.517 [2024-12-16 19:33:56.707762] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:32:12.517 [2024-12-16 19:33:56.707767] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:32:12.517 [2024-12-16 19:33:56.707772] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:12.517 [2024-12-16 19:33:56.707777] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:32:12.517 [2024-12-16 19:33:56.707782] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:32:12.517 [2024-12-16 19:33:56.707788] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:12.517 [2024-12-16 19:33:56.707793] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:32:12.517 [2024-12-16 19:33:56.707798] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:32:12.517 [2024-12-16 19:33:56.707802] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:12.517 [2024-12-16 19:33:56.707807] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:32:12.517 [2024-12-16 19:33:56.707812] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:32:12.517 [2024-12-16 19:33:56.707817] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:12.517 [2024-12-16 19:33:56.707822] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:32:12.517 [2024-12-16 19:33:56.707827] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:32:12.517 [2024-12-16 19:33:56.707832] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:12.517 [2024-12-16 19:33:56.707837] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:32:12.517 [2024-12-16 19:33:56.707843] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:32:12.517 [2024-12-16 19:33:56.707848] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:12.517 [2024-12-16 19:33:56.707853] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:32:12.517 [2024-12-16 19:33:56.707857] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:32:12.517 [2024-12-16 19:33:56.707862] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:12.517 [2024-12-16 19:33:56.707867] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:32:12.517 [2024-12-16 19:33:56.707872] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:32:12.517 [2024-12-16 19:33:56.707877] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:12.517 [2024-12-16 19:33:56.707881] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:32:12.517 [2024-12-16 19:33:56.707886] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:32:12.517 [2024-12-16 19:33:56.707892] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:12.517 [2024-12-16 19:33:56.707896] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:32:12.517 [2024-12-16 
19:33:56.707904] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:32:12.517 [2024-12-16 19:33:56.707910] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:12.517 [2024-12-16 19:33:56.707917] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:12.517 [2024-12-16 19:33:56.707923] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:32:12.517 [2024-12-16 19:33:56.707928] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:32:12.517 [2024-12-16 19:33:56.707933] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:32:12.517 [2024-12-16 19:33:56.707938] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:32:12.517 [2024-12-16 19:33:56.707943] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:32:12.517 [2024-12-16 19:33:56.707948] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:32:12.517 [2024-12-16 19:33:56.707954] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:32:12.517 [2024-12-16 19:33:56.707961] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:12.517 [2024-12-16 19:33:56.707967] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:32:12.517 [2024-12-16 19:33:56.707972] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:32:12.517 [2024-12-16 19:33:56.707977] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:32:12.517 [2024-12-16 19:33:56.707982] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:32:12.517 [2024-12-16 19:33:56.707988] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:32:12.517 [2024-12-16 19:33:56.707993] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:32:12.517 [2024-12-16 19:33:56.707998] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:32:12.517 [2024-12-16 19:33:56.708004] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:32:12.517 [2024-12-16 19:33:56.708009] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:32:12.517 [2024-12-16 19:33:56.708014] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:32:12.517 [2024-12-16 19:33:56.708020] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:32:12.517 [2024-12-16 19:33:56.708025] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:32:12.517 [2024-12-16 19:33:56.708030] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 
blk_sz:0x20 00:32:12.517 [2024-12-16 19:33:56.708035] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:32:12.517 [2024-12-16 19:33:56.708041] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:32:12.517 [2024-12-16 19:33:56.708046] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:12.517 [2024-12-16 19:33:56.708053] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:32:12.517 [2024-12-16 19:33:56.708059] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:32:12.517 [2024-12-16 19:33:56.708065] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:32:12.517 [2024-12-16 19:33:56.708071] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:32:12.517 [2024-12-16 19:33:56.708076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:12.517 [2024-12-16 19:33:56.708082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:32:12.517 [2024-12-16 19:33:56.708088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.459 ms 00:32:12.517 [2024-12-16 19:33:56.708093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:12.517 [2024-12-16 19:33:56.726354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:12.517 [2024-12-16 19:33:56.726379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:12.517 [2024-12-16 19:33:56.726387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.231 ms 00:32:12.517 [2024-12-16 19:33:56.726392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:12.517 [2024-12-16 19:33:56.726454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:12.517 [2024-12-16 19:33:56.726462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:32:12.517 [2024-12-16 19:33:56.726469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:32:12.517 [2024-12-16 19:33:56.726474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:12.517 [2024-12-16 19:33:56.763634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:12.517 [2024-12-16 19:33:56.763666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:12.517 [2024-12-16 19:33:56.763675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.124 ms 00:32:12.517 [2024-12-16 19:33:56.763685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:12.517 [2024-12-16 19:33:56.763716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:12.517 [2024-12-16 19:33:56.763724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:12.517 [2024-12-16 19:33:56.763731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:32:12.517 [2024-12-16 19:33:56.763736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:12.517 [2024-12-16 19:33:56.763807] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:12.517 [2024-12-16 19:33:56.763815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:12.517 [2024-12-16 19:33:56.763821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:32:12.517 [2024-12-16 19:33:56.763827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:12.517 [2024-12-16 19:33:56.763915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:12.517 [2024-12-16 19:33:56.763922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:12.517 [2024-12-16 19:33:56.763927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:32:12.517 [2024-12-16 19:33:56.763933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:12.517 [2024-12-16 19:33:56.774342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:12.517 [2024-12-16 19:33:56.774368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:12.517 [2024-12-16 19:33:56.774376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.396 ms 00:32:12.518 [2024-12-16 19:33:56.774382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:12.518 [2024-12-16 19:33:56.774465] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:32:12.518 [2024-12-16 19:33:56.774475] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:32:12.518 [2024-12-16 19:33:56.774484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:12.518 [2024-12-16 19:33:56.774489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:32:12.518 [2024-12-16 19:33:56.774495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:32:12.518 [2024-12-16 19:33:56.774500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:12.518 [2024-12-16 19:33:56.783733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:12.518 [2024-12-16 19:33:56.783840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:32:12.518 [2024-12-16 19:33:56.783853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.222 ms 00:32:12.518 [2024-12-16 19:33:56.783859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:12.518 [2024-12-16 19:33:56.783945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:12.518 [2024-12-16 19:33:56.783952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:32:12.518 [2024-12-16 19:33:56.783962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:32:12.518 [2024-12-16 19:33:56.783967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:12.518 [2024-12-16 19:33:56.783992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:12.518 [2024-12-16 19:33:56.783999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:32:12.518 [2024-12-16 19:33:56.784010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:32:12.518 [2024-12-16 19:33:56.784016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:12.518 [2024-12-16 19:33:56.784459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:12.518 [2024-12-16 
19:33:56.784469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:32:12.518 [2024-12-16 19:33:56.784475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.416 ms 00:32:12.518 [2024-12-16 19:33:56.784482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:12.518 [2024-12-16 19:33:56.784494] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:32:12.518 [2024-12-16 19:33:56.784501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:12.518 [2024-12-16 19:33:56.784507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:32:12.518 [2024-12-16 19:33:56.784512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:32:12.518 [2024-12-16 19:33:56.784518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:12.518 [2024-12-16 19:33:56.793008] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:32:12.518 [2024-12-16 19:33:56.793112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:12.518 [2024-12-16 19:33:56.793121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:32:12.518 [2024-12-16 19:33:56.793127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.571 ms 00:32:12.518 [2024-12-16 19:33:56.793133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:12.518 [2024-12-16 19:33:56.794693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:12.518 [2024-12-16 19:33:56.794783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:32:12.518 [2024-12-16 19:33:56.794795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.546 ms 00:32:12.518 [2024-12-16 19:33:56.794801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:12.518 [2024-12-16 19:33:56.794852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:12.518 [2024-12-16 19:33:56.794860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:32:12.518 [2024-12-16 19:33:56.794866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:32:12.518 [2024-12-16 19:33:56.794872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:12.518 [2024-12-16 19:33:56.794910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:12.518 [2024-12-16 19:33:56.794917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:32:12.518 [2024-12-16 19:33:56.794923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:32:12.518 [2024-12-16 19:33:56.794928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:12.518 [2024-12-16 19:33:56.794948] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:32:12.518 [2024-12-16 19:33:56.794955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:12.518 [2024-12-16 19:33:56.794961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:32:12.518 [2024-12-16 19:33:56.794966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:32:12.518 [2024-12-16 19:33:56.794972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:12.518 [2024-12-16 19:33:56.812799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:32:12.518 [2024-12-16 19:33:56.812825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:32:12.518 [2024-12-16 19:33:56.812834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.812 ms 00:32:12.518 [2024-12-16 19:33:56.812840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:12.518 [2024-12-16 19:33:56.812890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:12.518 [2024-12-16 19:33:56.812898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:32:12.518 [2024-12-16 19:33:56.812904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:32:12.518 [2024-12-16 19:33:56.812910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:12.518 [2024-12-16 19:33:56.813599] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 110.399 ms, result 0 00:32:13.904  [2024-12-16T19:33:59.201Z] Copying: 13/1024 [MB] (13 MBps) [2024-12-16T19:34:00.144Z] Copying: 29/1024 [MB] (15 MBps) [2024-12-16T19:34:01.085Z] Copying: 40/1024 [MB] (10 MBps) [2024-12-16T19:34:02.025Z] Copying: 51/1024 [MB] (11 MBps) [2024-12-16T19:34:02.968Z] Copying: 72/1024 [MB] (20 MBps) [2024-12-16T19:34:04.354Z] Copying: 83/1024 [MB] (11 MBps) [2024-12-16T19:34:05.302Z] Copying: 96/1024 [MB] (12 MBps) [2024-12-16T19:34:06.241Z] Copying: 119/1024 [MB] (22 MBps) [2024-12-16T19:34:07.188Z] Copying: 138/1024 [MB] (19 MBps) [2024-12-16T19:34:08.187Z] Copying: 159/1024 [MB] (21 MBps) [2024-12-16T19:34:09.130Z] Copying: 171/1024 [MB] (12 MBps) [2024-12-16T19:34:10.074Z] Copying: 184/1024 [MB] (12 MBps) [2024-12-16T19:34:11.017Z] Copying: 197/1024 [MB] (12 MBps) [2024-12-16T19:34:11.961Z] Copying: 208/1024 [MB] (10 MBps) [2024-12-16T19:34:13.345Z] Copying: 230/1024 [MB] (22 MBps) [2024-12-16T19:34:14.289Z] Copying: 249/1024 [MB] (18 MBps) [2024-12-16T19:34:15.231Z] Copying: 271/1024 [MB] (22 MBps) [2024-12-16T19:34:16.173Z] Copying: 291/1024 [MB] (20 MBps) [2024-12-16T19:34:17.116Z] Copying: 303/1024 [MB] (11 MBps) [2024-12-16T19:34:18.060Z] Copying: 314/1024 [MB] (10 MBps) [2024-12-16T19:34:19.002Z] Copying: 330/1024 [MB] (16 MBps) [2024-12-16T19:34:20.386Z] Copying: 353/1024 [MB] (23 MBps) [2024-12-16T19:34:20.958Z] Copying: 371/1024 [MB] (17 MBps) [2024-12-16T19:34:22.342Z] Copying: 391/1024 [MB] (20 MBps) [2024-12-16T19:34:23.284Z] Copying: 412/1024 [MB] (21 MBps) [2024-12-16T19:34:24.226Z] Copying: 434/1024 [MB] (21 MBps) [2024-12-16T19:34:25.169Z] Copying: 455/1024 [MB] (21 MBps) [2024-12-16T19:34:26.110Z] Copying: 473/1024 [MB] (17 MBps) [2024-12-16T19:34:27.053Z] Copying: 499/1024 [MB] (25 MBps) [2024-12-16T19:34:27.996Z] Copying: 510/1024 [MB] (10 MBps) [2024-12-16T19:34:29.379Z] Copying: 532/1024 [MB] (22 MBps) [2024-12-16T19:34:30.322Z] Copying: 556/1024 [MB] (24 MBps) [2024-12-16T19:34:30.959Z] Copying: 581/1024 [MB] (24 MBps) [2024-12-16T19:34:32.344Z] Copying: 603/1024 [MB] (22 MBps) [2024-12-16T19:34:33.287Z] Copying: 618/1024 [MB] (14 MBps) [2024-12-16T19:34:34.230Z] Copying: 652/1024 [MB] (34 MBps) [2024-12-16T19:34:35.174Z] Copying: 676/1024 [MB] (23 MBps) [2024-12-16T19:34:36.116Z] Copying: 686/1024 [MB] (10 MBps) [2024-12-16T19:34:37.060Z] Copying: 697/1024 [MB] (10 MBps) [2024-12-16T19:34:38.003Z] Copying: 708/1024 [MB] (11 MBps) [2024-12-16T19:34:39.386Z] Copying: 718/1024 [MB] (10 MBps) [2024-12-16T19:34:39.959Z] Copying: 736/1024 [MB] (17 MBps) [2024-12-16T19:34:41.342Z] Copying: 757/1024 
[MB] (20 MBps) [2024-12-16T19:34:42.286Z] Copying: 778/1024 [MB] (21 MBps) [2024-12-16T19:34:43.230Z] Copying: 794/1024 [MB] (15 MBps) [2024-12-16T19:34:44.174Z] Copying: 813/1024 [MB] (18 MBps) [2024-12-16T19:34:45.118Z] Copying: 831/1024 [MB] (18 MBps) [2024-12-16T19:34:46.063Z] Copying: 852/1024 [MB] (20 MBps) [2024-12-16T19:34:47.007Z] Copying: 864/1024 [MB] (12 MBps) [2024-12-16T19:34:48.394Z] Copying: 887/1024 [MB] (22 MBps) [2024-12-16T19:34:48.967Z] Copying: 902/1024 [MB] (15 MBps) [2024-12-16T19:34:50.353Z] Copying: 921/1024 [MB] (19 MBps) [2024-12-16T19:34:51.298Z] Copying: 932/1024 [MB] (10 MBps) [2024-12-16T19:34:52.241Z] Copying: 951/1024 [MB] (18 MBps) [2024-12-16T19:34:53.184Z] Copying: 967/1024 [MB] (15 MBps) [2024-12-16T19:34:54.170Z] Copying: 986/1024 [MB] (19 MBps) [2024-12-16T19:34:55.114Z] Copying: 997/1024 [MB] (10 MBps) [2024-12-16T19:34:55.685Z] Copying: 1016/1024 [MB] (19 MBps) [2024-12-16T19:34:55.948Z] Copying: 1024/1024 [MB] (average 17 MBps)[2024-12-16 19:34:55.730369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:11.594 [2024-12-16 19:34:55.730458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:33:11.594 [2024-12-16 19:34:55.730475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:33:11.594 [2024-12-16 19:34:55.730485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.594 [2024-12-16 19:34:55.730516] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:33:11.594 [2024-12-16 19:34:55.734078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:11.594 [2024-12-16 19:34:55.734281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:33:11.594 [2024-12-16 19:34:55.734306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.544 ms 00:33:11.594 [2024-12-16 19:34:55.734316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.594 [2024-12-16 19:34:55.734606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:11.594 [2024-12-16 19:34:55.734618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:33:11.594 [2024-12-16 19:34:55.734627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.249 ms 00:33:11.594 [2024-12-16 19:34:55.734635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.594 [2024-12-16 19:34:55.734671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:11.594 [2024-12-16 19:34:55.734680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:33:11.594 [2024-12-16 19:34:55.734690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:33:11.594 [2024-12-16 19:34:55.734698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.594 [2024-12-16 19:34:55.734757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:11.594 [2024-12-16 19:34:55.734765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:33:11.594 [2024-12-16 19:34:55.734774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:33:11.594 [2024-12-16 19:34:55.734782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.594 [2024-12-16 19:34:55.734797] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:33:11.594 [2024-12-16 19:34:55.734812] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:33:11.594 [2024-12-16 19:34:55.734822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:33:11.594 [2024-12-16 19:34:55.734830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:33:11.594 [2024-12-16 19:34:55.734838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:33:11.594 [2024-12-16 19:34:55.734846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:33:11.594 [2024-12-16 19:34:55.734853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:33:11.594 [2024-12-16 19:34:55.734861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:33:11.594 [2024-12-16 19:34:55.734869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:33:11.594 [2024-12-16 19:34:55.734877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:33:11.594 [2024-12-16 19:34:55.734885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:33:11.594 [2024-12-16 19:34:55.734892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:33:11.594 [2024-12-16 19:34:55.734900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:33:11.594 [2024-12-16 19:34:55.734908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:33:11.594 [2024-12-16 19:34:55.734916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:33:11.594 [2024-12-16 19:34:55.734925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:33:11.594 [2024-12-16 19:34:55.734933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:33:11.594 [2024-12-16 19:34:55.734940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:33:11.594 [2024-12-16 19:34:55.734949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:33:11.594 [2024-12-16 19:34:55.734958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:33:11.594 [2024-12-16 19:34:55.734966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:33:11.594 [2024-12-16 19:34:55.734973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:33:11.594 [2024-12-16 19:34:55.734981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:33:11.594 [2024-12-16 19:34:55.734988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:33:11.594 [2024-12-16 19:34:55.734996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:33:11.594 [2024-12-16 19:34:55.735003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:33:11.594 [2024-12-16 
19:34:55.735011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:33:11.594 [2024-12-16 19:34:55.735018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:33:11.594 [2024-12-16 19:34:55.735026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:33:11.594 [2024-12-16 19:34:55.735033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:33:11.594 [2024-12-16 19:34:55.735041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:33:11.594 [2024-12-16 19:34:55.735048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:33:11.594 [2024-12-16 19:34:55.735055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:33:11.594 [2024-12-16 19:34:55.735062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:33:11.594 [2024-12-16 19:34:55.735070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:33:11.594 [2024-12-16 19:34:55.735077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:33:11.594 [2024-12-16 19:34:55.735084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:33:11.594 [2024-12-16 19:34:55.735092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:33:11.594 [2024-12-16 19:34:55.735100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:33:11.594 [2024-12-16 19:34:55.735108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:33:11.594 [2024-12-16 19:34:55.735115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:33:11.594 [2024-12-16 19:34:55.735122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:33:11.594 [2024-12-16 19:34:55.735130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:33:11.594 [2024-12-16 19:34:55.735137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:33:11.594 [2024-12-16 19:34:55.735144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:33:11.594 [2024-12-16 19:34:55.735152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:33:11.594 [2024-12-16 19:34:55.735167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:33:11.594 [2024-12-16 19:34:55.735190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:33:11.594 [2024-12-16 19:34:55.735198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:33:11.594 [2024-12-16 19:34:55.735206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:33:11.594 [2024-12-16 19:34:55.735215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 
00:33:11.594 [2024-12-16 19:34:55.735222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:33:11.594 [2024-12-16 19:34:55.735230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:33:11.594 [2024-12-16 19:34:55.735238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:33:11.594 [2024-12-16 19:34:55.735246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:33:11.594 [2024-12-16 19:34:55.735254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:33:11.594 [2024-12-16 19:34:55.735262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:33:11.594 [2024-12-16 19:34:55.735269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:33:11.594 [2024-12-16 19:34:55.735277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:33:11.594 [2024-12-16 19:34:55.735284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:33:11.594 [2024-12-16 19:34:55.735292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:33:11.594 [2024-12-16 19:34:55.735301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:33:11.594 [2024-12-16 19:34:55.735309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:33:11.594 [2024-12-16 19:34:55.735316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:33:11.594 [2024-12-16 19:34:55.735325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:33:11.594 [2024-12-16 19:34:55.735332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:33:11.594 [2024-12-16 19:34:55.735340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:33:11.594 [2024-12-16 19:34:55.735348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:33:11.595 [2024-12-16 19:34:55.735356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:33:11.595 [2024-12-16 19:34:55.735388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:33:11.595 [2024-12-16 19:34:55.735397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:33:11.595 [2024-12-16 19:34:55.735405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:33:11.595 [2024-12-16 19:34:55.735413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:33:11.595 [2024-12-16 19:34:55.735421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:33:11.595 [2024-12-16 19:34:55.735428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:33:11.595 [2024-12-16 19:34:55.735436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 
wr_cnt: 0 state: free 00:33:11.595 [2024-12-16 19:34:55.735444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:33:11.595 [2024-12-16 19:34:55.735454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:33:11.595 [2024-12-16 19:34:55.735462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:33:11.595 [2024-12-16 19:34:55.735469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:33:11.595 [2024-12-16 19:34:55.735477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:33:11.595 [2024-12-16 19:34:55.735485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:33:11.595 [2024-12-16 19:34:55.735493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:33:11.595 [2024-12-16 19:34:55.735501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:33:11.595 [2024-12-16 19:34:55.735509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:33:11.595 [2024-12-16 19:34:55.735516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:33:11.595 [2024-12-16 19:34:55.735526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:33:11.595 [2024-12-16 19:34:55.735533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:33:11.595 [2024-12-16 19:34:55.735541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:33:11.595 [2024-12-16 19:34:55.735549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:33:11.595 [2024-12-16 19:34:55.735556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:33:11.595 [2024-12-16 19:34:55.735564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:33:11.595 [2024-12-16 19:34:55.735571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:33:11.595 [2024-12-16 19:34:55.735579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:33:11.595 [2024-12-16 19:34:55.735587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:33:11.595 [2024-12-16 19:34:55.735594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:33:11.595 [2024-12-16 19:34:55.735602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:33:11.595 [2024-12-16 19:34:55.735609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:33:11.595 [2024-12-16 19:34:55.735617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:33:11.595 [2024-12-16 19:34:55.735624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:33:11.595 [2024-12-16 19:34:55.735632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 100: 0 / 261120 wr_cnt: 0 state: free 00:33:11.595 [2024-12-16 19:34:55.735649] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:33:11.595 [2024-12-16 19:34:55.735657] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 6c279ede-0f39-4354-bc49-0382ad208b2d 00:33:11.595 [2024-12-16 19:34:55.735666] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:33:11.595 [2024-12-16 19:34:55.735674] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:33:11.595 [2024-12-16 19:34:55.735681] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:33:11.595 [2024-12-16 19:34:55.735689] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:33:11.595 [2024-12-16 19:34:55.735696] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:33:11.595 [2024-12-16 19:34:55.735704] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:33:11.595 [2024-12-16 19:34:55.735714] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:33:11.595 [2024-12-16 19:34:55.735721] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:33:11.595 [2024-12-16 19:34:55.735727] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:33:11.595 [2024-12-16 19:34:55.735734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:11.595 [2024-12-16 19:34:55.735741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:33:11.595 [2024-12-16 19:34:55.735749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.938 ms 00:33:11.595 [2024-12-16 19:34:55.735759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.595 [2024-12-16 19:34:55.750599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:11.595 [2024-12-16 19:34:55.750635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:33:11.595 [2024-12-16 19:34:55.750647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.822 ms 00:33:11.595 [2024-12-16 19:34:55.750656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.595 [2024-12-16 19:34:55.751042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:11.595 [2024-12-16 19:34:55.751058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:33:11.595 [2024-12-16 19:34:55.751067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.361 ms 00:33:11.595 [2024-12-16 19:34:55.751074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.595 [2024-12-16 19:34:55.787377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:11.595 [2024-12-16 19:34:55.787411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:33:11.595 [2024-12-16 19:34:55.787423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:11.595 [2024-12-16 19:34:55.787433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.595 [2024-12-16 19:34:55.787508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:11.595 [2024-12-16 19:34:55.787522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:33:11.595 [2024-12-16 19:34:55.787532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:11.595 [2024-12-16 19:34:55.787541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:33:11.595 [2024-12-16 19:34:55.787603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:11.595 [2024-12-16 19:34:55.787619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:33:11.595 [2024-12-16 19:34:55.787629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:11.595 [2024-12-16 19:34:55.787638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.595 [2024-12-16 19:34:55.787656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:11.595 [2024-12-16 19:34:55.787665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:33:11.595 [2024-12-16 19:34:55.787677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:11.595 [2024-12-16 19:34:55.787686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.595 [2024-12-16 19:34:55.870842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:11.595 [2024-12-16 19:34:55.870891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:33:11.595 [2024-12-16 19:34:55.870904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:11.595 [2024-12-16 19:34:55.870913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.595 [2024-12-16 19:34:55.939717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:11.595 [2024-12-16 19:34:55.939761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:33:11.595 [2024-12-16 19:34:55.939779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:11.595 [2024-12-16 19:34:55.939788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.595 [2024-12-16 19:34:55.939872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:11.595 [2024-12-16 19:34:55.939883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:33:11.595 [2024-12-16 19:34:55.939891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:11.595 [2024-12-16 19:34:55.939904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.595 [2024-12-16 19:34:55.939940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:11.595 [2024-12-16 19:34:55.939949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:33:11.595 [2024-12-16 19:34:55.939958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:11.595 [2024-12-16 19:34:55.939968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.595 [2024-12-16 19:34:55.940050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:11.595 [2024-12-16 19:34:55.940061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:33:11.595 [2024-12-16 19:34:55.940069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:11.595 [2024-12-16 19:34:55.940077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.595 [2024-12-16 19:34:55.940108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:11.595 [2024-12-16 19:34:55.940117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:33:11.595 [2024-12-16 19:34:55.940126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:11.595 
[2024-12-16 19:34:55.940134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.595 [2024-12-16 19:34:55.940211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:11.595 [2024-12-16 19:34:55.940227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:33:11.595 [2024-12-16 19:34:55.940235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:11.595 [2024-12-16 19:34:55.940243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.595 [2024-12-16 19:34:55.940289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:11.595 [2024-12-16 19:34:55.940298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:33:11.595 [2024-12-16 19:34:55.940306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:11.595 [2024-12-16 19:34:55.940314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.596 [2024-12-16 19:34:55.940452] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 210.052 ms, result 0 00:33:12.537 00:33:12.537 00:33:12.537 19:34:56 ftl.ftl_restore_fast -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:33:15.082 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:33:15.082 19:34:58 ftl.ftl_restore_fast -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:33:15.082 [2024-12-16 19:34:58.998475] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:33:15.082 [2024-12-16 19:34:58.998651] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86938 ] 00:33:15.082 [2024-12-16 19:34:59.161774] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:15.082 [2024-12-16 19:34:59.286007] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:33:15.344 [2024-12-16 19:34:59.581572] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:33:15.344 [2024-12-16 19:34:59.581872] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:33:15.606 [2024-12-16 19:34:59.742943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:15.606 [2024-12-16 19:34:59.743007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:33:15.606 [2024-12-16 19:34:59.743023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:33:15.606 [2024-12-16 19:34:59.743032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:15.606 [2024-12-16 19:34:59.743087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:15.606 [2024-12-16 19:34:59.743100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:33:15.606 [2024-12-16 19:34:59.743109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:33:15.606 [2024-12-16 19:34:59.743118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:15.606 [2024-12-16 19:34:59.743138] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] 
Using nvc0n1p0 as write buffer cache 00:33:15.606 [2024-12-16 19:34:59.743885] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:33:15.606 [2024-12-16 19:34:59.743918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:15.606 [2024-12-16 19:34:59.743927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:33:15.606 [2024-12-16 19:34:59.743936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.785 ms 00:33:15.606 [2024-12-16 19:34:59.743944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:15.606 [2024-12-16 19:34:59.744273] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:33:15.606 [2024-12-16 19:34:59.744306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:15.606 [2024-12-16 19:34:59.744318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:33:15.606 [2024-12-16 19:34:59.744328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:33:15.606 [2024-12-16 19:34:59.744336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:15.606 [2024-12-16 19:34:59.744390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:15.606 [2024-12-16 19:34:59.744400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:33:15.606 [2024-12-16 19:34:59.744408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:33:15.606 [2024-12-16 19:34:59.744415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:15.606 [2024-12-16 19:34:59.744681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:15.606 [2024-12-16 19:34:59.744698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:33:15.606 [2024-12-16 19:34:59.744707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.232 ms 00:33:15.606 [2024-12-16 19:34:59.744715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:15.606 [2024-12-16 19:34:59.744784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:15.606 [2024-12-16 19:34:59.744793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:33:15.606 [2024-12-16 19:34:59.744802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:33:15.606 [2024-12-16 19:34:59.744810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:15.606 [2024-12-16 19:34:59.744832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:15.606 [2024-12-16 19:34:59.744841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:33:15.606 [2024-12-16 19:34:59.744852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:33:15.606 [2024-12-16 19:34:59.744860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:15.606 [2024-12-16 19:34:59.744879] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:33:15.606 [2024-12-16 19:34:59.749077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:15.606 [2024-12-16 19:34:59.749117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:33:15.606 [2024-12-16 19:34:59.749127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.202 ms 00:33:15.606 [2024-12-16 19:34:59.749135] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:15.606 [2024-12-16 19:34:59.749200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:15.606 [2024-12-16 19:34:59.749210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:33:15.606 [2024-12-16 19:34:59.749218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:33:15.606 [2024-12-16 19:34:59.749225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:15.606 [2024-12-16 19:34:59.749278] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:33:15.606 [2024-12-16 19:34:59.749302] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:33:15.606 [2024-12-16 19:34:59.749340] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:33:15.606 [2024-12-16 19:34:59.749356] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:33:15.606 [2024-12-16 19:34:59.749460] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:33:15.606 [2024-12-16 19:34:59.749472] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:33:15.606 [2024-12-16 19:34:59.749482] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:33:15.606 [2024-12-16 19:34:59.749492] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:33:15.606 [2024-12-16 19:34:59.749501] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:33:15.606 [2024-12-16 19:34:59.749513] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:33:15.606 [2024-12-16 19:34:59.749520] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:33:15.606 [2024-12-16 19:34:59.749528] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:33:15.606 [2024-12-16 19:34:59.749536] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:33:15.606 [2024-12-16 19:34:59.749545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:15.607 [2024-12-16 19:34:59.749553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:33:15.607 [2024-12-16 19:34:59.749561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.269 ms 00:33:15.607 [2024-12-16 19:34:59.749568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:15.607 [2024-12-16 19:34:59.749653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:15.607 [2024-12-16 19:34:59.749662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:33:15.607 [2024-12-16 19:34:59.749670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:33:15.607 [2024-12-16 19:34:59.749679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:15.607 [2024-12-16 19:34:59.749775] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:33:15.607 [2024-12-16 19:34:59.749786] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:33:15.607 [2024-12-16 19:34:59.749795] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 
MiB 00:33:15.607 [2024-12-16 19:34:59.749802] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:15.607 [2024-12-16 19:34:59.749810] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:33:15.607 [2024-12-16 19:34:59.749816] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:33:15.607 [2024-12-16 19:34:59.749823] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:33:15.607 [2024-12-16 19:34:59.749832] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:33:15.607 [2024-12-16 19:34:59.749839] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:33:15.607 [2024-12-16 19:34:59.749846] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:33:15.607 [2024-12-16 19:34:59.749853] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:33:15.607 [2024-12-16 19:34:59.749864] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:33:15.607 [2024-12-16 19:34:59.749871] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:33:15.607 [2024-12-16 19:34:59.749878] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:33:15.607 [2024-12-16 19:34:59.749886] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:33:15.607 [2024-12-16 19:34:59.749899] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:15.607 [2024-12-16 19:34:59.749906] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:33:15.607 [2024-12-16 19:34:59.749913] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:33:15.607 [2024-12-16 19:34:59.749921] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:15.607 [2024-12-16 19:34:59.749929] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:33:15.607 [2024-12-16 19:34:59.749935] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:33:15.607 [2024-12-16 19:34:59.749942] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:15.607 [2024-12-16 19:34:59.749949] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:33:15.607 [2024-12-16 19:34:59.749955] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:33:15.607 [2024-12-16 19:34:59.749961] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:15.607 [2024-12-16 19:34:59.749968] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:33:15.607 [2024-12-16 19:34:59.749974] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:33:15.607 [2024-12-16 19:34:59.749981] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:15.607 [2024-12-16 19:34:59.749987] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:33:15.607 [2024-12-16 19:34:59.749995] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:33:15.607 [2024-12-16 19:34:59.750001] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:15.607 [2024-12-16 19:34:59.750008] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:33:15.607 [2024-12-16 19:34:59.750014] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:33:15.607 [2024-12-16 19:34:59.750021] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:33:15.607 [2024-12-16 19:34:59.750027] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region trim_md_mirror 00:33:15.607 [2024-12-16 19:34:59.750034] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:33:15.607 [2024-12-16 19:34:59.750041] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:33:15.607 [2024-12-16 19:34:59.750047] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:33:15.607 [2024-12-16 19:34:59.750053] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:33:15.607 [2024-12-16 19:34:59.750060] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:15.607 [2024-12-16 19:34:59.750066] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:33:15.607 [2024-12-16 19:34:59.750074] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:33:15.607 [2024-12-16 19:34:59.750082] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:15.607 [2024-12-16 19:34:59.750091] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:33:15.607 [2024-12-16 19:34:59.750099] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:33:15.607 [2024-12-16 19:34:59.750106] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:33:15.607 [2024-12-16 19:34:59.750114] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:15.607 [2024-12-16 19:34:59.750124] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:33:15.607 [2024-12-16 19:34:59.750131] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:33:15.607 [2024-12-16 19:34:59.750138] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:33:15.607 [2024-12-16 19:34:59.750145] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:33:15.607 [2024-12-16 19:34:59.750152] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:33:15.607 [2024-12-16 19:34:59.750159] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:33:15.607 [2024-12-16 19:34:59.750167] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:33:15.607 [2024-12-16 19:34:59.750192] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:15.607 [2024-12-16 19:34:59.750201] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:33:15.607 [2024-12-16 19:34:59.750209] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:33:15.607 [2024-12-16 19:34:59.750217] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:33:15.607 [2024-12-16 19:34:59.750224] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:33:15.607 [2024-12-16 19:34:59.750232] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:33:15.607 [2024-12-16 19:34:59.750239] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:33:15.607 [2024-12-16 19:34:59.750247] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:33:15.607 [2024-12-16 19:34:59.750254] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:33:15.607 [2024-12-16 19:34:59.750262] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:33:15.607 [2024-12-16 19:34:59.750269] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:33:15.607 [2024-12-16 19:34:59.750276] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:33:15.607 [2024-12-16 19:34:59.750284] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:33:15.607 [2024-12-16 19:34:59.750292] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:33:15.607 [2024-12-16 19:34:59.750299] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:33:15.607 [2024-12-16 19:34:59.750307] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:33:15.607 [2024-12-16 19:34:59.750315] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:15.607 [2024-12-16 19:34:59.750324] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:33:15.607 [2024-12-16 19:34:59.750332] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:33:15.607 [2024-12-16 19:34:59.750340] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:33:15.607 [2024-12-16 19:34:59.750347] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:33:15.607 [2024-12-16 19:34:59.750356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:15.607 [2024-12-16 19:34:59.750363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:33:15.607 [2024-12-16 19:34:59.750372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.649 ms 00:33:15.607 [2024-12-16 19:34:59.750380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:15.607 [2024-12-16 19:34:59.777610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:15.607 [2024-12-16 19:34:59.777654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:33:15.607 [2024-12-16 19:34:59.777666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.189 ms 00:33:15.607 [2024-12-16 19:34:59.777674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:15.607 [2024-12-16 19:34:59.777758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:15.607 [2024-12-16 19:34:59.777767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:33:15.607 [2024-12-16 19:34:59.777780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.060 ms 00:33:15.607 [2024-12-16 19:34:59.777788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:15.607 [2024-12-16 19:34:59.822941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:15.607 [2024-12-16 19:34:59.822992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:33:15.608 [2024-12-16 19:34:59.823005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.098 ms 00:33:15.608 [2024-12-16 19:34:59.823013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:15.608 [2024-12-16 19:34:59.823067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:15.608 [2024-12-16 19:34:59.823077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:33:15.608 [2024-12-16 19:34:59.823086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:33:15.608 [2024-12-16 19:34:59.823094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:15.608 [2024-12-16 19:34:59.823228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:15.608 [2024-12-16 19:34:59.823240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:33:15.608 [2024-12-16 19:34:59.823250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:33:15.608 [2024-12-16 19:34:59.823257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:15.608 [2024-12-16 19:34:59.823384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:15.608 [2024-12-16 19:34:59.823397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:33:15.608 [2024-12-16 19:34:59.823406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.111 ms 00:33:15.608 [2024-12-16 19:34:59.823414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:15.608 [2024-12-16 19:34:59.838837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:15.608 [2024-12-16 19:34:59.838882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:33:15.608 [2024-12-16 19:34:59.838894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.403 ms 00:33:15.608 [2024-12-16 19:34:59.838902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:15.608 [2024-12-16 19:34:59.839052] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:33:15.608 [2024-12-16 19:34:59.839066] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:33:15.608 [2024-12-16 19:34:59.839077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:15.608 [2024-12-16 19:34:59.839088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:33:15.608 [2024-12-16 19:34:59.839097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:33:15.608 [2024-12-16 19:34:59.839104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:15.608 [2024-12-16 19:34:59.851584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:15.608 [2024-12-16 19:34:59.851624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:33:15.608 [2024-12-16 19:34:59.851634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.463 ms 00:33:15.608 [2024-12-16 19:34:59.851642] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:15.608 [2024-12-16 19:34:59.851771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:15.608 [2024-12-16 19:34:59.851780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:33:15.608 [2024-12-16 19:34:59.851791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:33:15.608 [2024-12-16 19:34:59.851804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:15.608 [2024-12-16 19:34:59.851854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:15.608 [2024-12-16 19:34:59.851864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:33:15.608 [2024-12-16 19:34:59.851881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:33:15.608 [2024-12-16 19:34:59.851888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:15.608 [2024-12-16 19:34:59.852484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:15.608 [2024-12-16 19:34:59.852506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:33:15.608 [2024-12-16 19:34:59.852516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.557 ms 00:33:15.608 [2024-12-16 19:34:59.852524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:15.608 [2024-12-16 19:34:59.852546] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:33:15.608 [2024-12-16 19:34:59.852556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:15.608 [2024-12-16 19:34:59.852564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:33:15.608 [2024-12-16 19:34:59.852573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:33:15.608 [2024-12-16 19:34:59.852581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:15.608 [2024-12-16 19:34:59.864991] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:33:15.608 [2024-12-16 19:34:59.865327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:15.608 [2024-12-16 19:34:59.865345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:33:15.608 [2024-12-16 19:34:59.865357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.727 ms 00:33:15.608 [2024-12-16 19:34:59.865364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:15.608 [2024-12-16 19:34:59.867594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:15.608 [2024-12-16 19:34:59.867626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:33:15.608 [2024-12-16 19:34:59.867636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.204 ms 00:33:15.608 [2024-12-16 19:34:59.867643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:15.608 [2024-12-16 19:34:59.867737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:15.608 [2024-12-16 19:34:59.867747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:33:15.608 [2024-12-16 19:34:59.867756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:33:15.608 [2024-12-16 19:34:59.867764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:15.608 [2024-12-16 19:34:59.867788] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:15.608 [2024-12-16 19:34:59.867802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:33:15.608 [2024-12-16 19:34:59.867811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:33:15.608 [2024-12-16 19:34:59.867818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:15.608 [2024-12-16 19:34:59.867850] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:33:15.608 [2024-12-16 19:34:59.867860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:15.608 [2024-12-16 19:34:59.867868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:33:15.608 [2024-12-16 19:34:59.867876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:33:15.608 [2024-12-16 19:34:59.867883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:15.608 [2024-12-16 19:34:59.894274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:15.608 [2024-12-16 19:34:59.894324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:33:15.608 [2024-12-16 19:34:59.894337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.370 ms 00:33:15.608 [2024-12-16 19:34:59.894345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:15.608 [2024-12-16 19:34:59.894429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:15.608 [2024-12-16 19:34:59.894440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:33:15.608 [2024-12-16 19:34:59.894449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:33:15.608 [2024-12-16 19:34:59.894457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:15.608 [2024-12-16 19:34:59.895663] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 152.213 ms, result 0 00:33:16.994  [2024-12-16T19:35:01.922Z] Copying: 18/1024 [MB] (18 MBps) [2024-12-16T19:35:03.308Z] Copying: 29/1024 [MB] (10 MBps) [2024-12-16T19:35:04.252Z] Copying: 39/1024 [MB] (10 MBps) [2024-12-16T19:35:05.195Z] Copying: 50/1024 [MB] (10 MBps) [2024-12-16T19:35:06.223Z] Copying: 70/1024 [MB] (19 MBps) [2024-12-16T19:35:07.168Z] Copying: 103/1024 [MB] (33 MBps) [2024-12-16T19:35:08.112Z] Copying: 117/1024 [MB] (13 MBps) [2024-12-16T19:35:09.053Z] Copying: 134/1024 [MB] (17 MBps) [2024-12-16T19:35:09.997Z] Copying: 146/1024 [MB] (12 MBps) [2024-12-16T19:35:10.940Z] Copying: 162/1024 [MB] (15 MBps) [2024-12-16T19:35:12.327Z] Copying: 178/1024 [MB] (15 MBps) [2024-12-16T19:35:13.269Z] Copying: 196/1024 [MB] (18 MBps) [2024-12-16T19:35:14.212Z] Copying: 209/1024 [MB] (12 MBps) [2024-12-16T19:35:15.156Z] Copying: 223/1024 [MB] (14 MBps) [2024-12-16T19:35:16.099Z] Copying: 240/1024 [MB] (16 MBps) [2024-12-16T19:35:17.043Z] Copying: 259/1024 [MB] (19 MBps) [2024-12-16T19:35:17.987Z] Copying: 276/1024 [MB] (17 MBps) [2024-12-16T19:35:18.931Z] Copying: 294/1024 [MB] (17 MBps) [2024-12-16T19:35:20.318Z] Copying: 315/1024 [MB] (21 MBps) [2024-12-16T19:35:21.261Z] Copying: 332/1024 [MB] (17 MBps) [2024-12-16T19:35:22.205Z] Copying: 347/1024 [MB] (14 MBps) [2024-12-16T19:35:23.145Z] Copying: 364/1024 [MB] (16 MBps) [2024-12-16T19:35:24.089Z] Copying: 378/1024 [MB] (13 MBps) [2024-12-16T19:35:25.032Z] Copying: 396/1024 [MB] (17 MBps) [2024-12-16T19:35:25.974Z] 
Copying: 408/1024 [MB] (12 MBps) [2024-12-16T19:35:26.917Z] Copying: 425/1024 [MB] (17 MBps) [2024-12-16T19:35:28.305Z] Copying: 440/1024 [MB] (14 MBps) [2024-12-16T19:35:29.249Z] Copying: 458/1024 [MB] (18 MBps) [2024-12-16T19:35:30.194Z] Copying: 469/1024 [MB] (10 MBps) [2024-12-16T19:35:31.137Z] Copying: 479/1024 [MB] (10 MBps) [2024-12-16T19:35:32.080Z] Copying: 495/1024 [MB] (15 MBps) [2024-12-16T19:35:33.021Z] Copying: 520/1024 [MB] (25 MBps) [2024-12-16T19:35:33.963Z] Copying: 530/1024 [MB] (10 MBps) [2024-12-16T19:35:35.350Z] Copying: 540/1024 [MB] (10 MBps) [2024-12-16T19:35:35.921Z] Copying: 550/1024 [MB] (10 MBps) [2024-12-16T19:35:37.309Z] Copying: 575/1024 [MB] (25 MBps) [2024-12-16T19:35:38.253Z] Copying: 589/1024 [MB] (13 MBps) [2024-12-16T19:35:39.196Z] Copying: 599/1024 [MB] (10 MBps) [2024-12-16T19:35:40.138Z] Copying: 609/1024 [MB] (10 MBps) [2024-12-16T19:35:41.082Z] Copying: 620/1024 [MB] (10 MBps) [2024-12-16T19:35:42.027Z] Copying: 645288/1048576 [kB] (10084 kBps) [2024-12-16T19:35:42.971Z] Copying: 640/1024 [MB] (10 MBps) [2024-12-16T19:35:43.916Z] Copying: 651/1024 [MB] (10 MBps) [2024-12-16T19:35:45.304Z] Copying: 662/1024 [MB] (10 MBps) [2024-12-16T19:35:46.326Z] Copying: 675/1024 [MB] (13 MBps) [2024-12-16T19:35:47.277Z] Copying: 686/1024 [MB] (10 MBps) [2024-12-16T19:35:48.221Z] Copying: 709/1024 [MB] (23 MBps) [2024-12-16T19:35:49.164Z] Copying: 723/1024 [MB] (13 MBps) [2024-12-16T19:35:50.108Z] Copying: 736/1024 [MB] (13 MBps) [2024-12-16T19:35:51.053Z] Copying: 749/1024 [MB] (12 MBps) [2024-12-16T19:35:51.997Z] Copying: 768/1024 [MB] (18 MBps) [2024-12-16T19:35:52.942Z] Copying: 785/1024 [MB] (16 MBps) [2024-12-16T19:35:54.334Z] Copying: 796/1024 [MB] (11 MBps) [2024-12-16T19:35:55.277Z] Copying: 810/1024 [MB] (13 MBps) [2024-12-16T19:35:56.221Z] Copying: 821/1024 [MB] (11 MBps) [2024-12-16T19:35:57.166Z] Copying: 841/1024 [MB] (19 MBps) [2024-12-16T19:35:58.110Z] Copying: 856/1024 [MB] (14 MBps) [2024-12-16T19:35:59.054Z] Copying: 874/1024 [MB] (18 MBps) [2024-12-16T19:35:59.998Z] Copying: 885/1024 [MB] (11 MBps) [2024-12-16T19:36:00.942Z] Copying: 905/1024 [MB] (19 MBps) [2024-12-16T19:36:02.327Z] Copying: 915/1024 [MB] (10 MBps) [2024-12-16T19:36:03.272Z] Copying: 926/1024 [MB] (10 MBps) [2024-12-16T19:36:04.216Z] Copying: 936/1024 [MB] (10 MBps) [2024-12-16T19:36:05.160Z] Copying: 953/1024 [MB] (16 MBps) [2024-12-16T19:36:06.100Z] Copying: 963/1024 [MB] (10 MBps) [2024-12-16T19:36:07.043Z] Copying: 974/1024 [MB] (10 MBps) [2024-12-16T19:36:07.985Z] Copying: 984/1024 [MB] (10 MBps) [2024-12-16T19:36:08.928Z] Copying: 999/1024 [MB] (15 MBps) [2024-12-16T19:36:09.879Z] Copying: 1023/1024 [MB] (23 MBps) [2024-12-16T19:36:09.879Z] Copying: 1024/1024 [MB] (average 14 MBps)[2024-12-16 19:36:09.739534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:25.525 [2024-12-16 19:36:09.739580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:34:25.525 [2024-12-16 19:36:09.739592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:34:25.525 [2024-12-16 19:36:09.739599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:25.525 [2024-12-16 19:36:09.741823] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:34:25.525 [2024-12-16 19:36:09.745087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:25.525 [2024-12-16 19:36:09.745203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Unregister IO device 00:34:25.525 [2024-12-16 19:36:09.745218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.148 ms 00:34:25.525 [2024-12-16 19:36:09.745225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:25.525 [2024-12-16 19:36:09.753390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:25.525 [2024-12-16 19:36:09.753485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:34:25.525 [2024-12-16 19:36:09.753536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.488 ms 00:34:25.525 [2024-12-16 19:36:09.753554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:25.525 [2024-12-16 19:36:09.753588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:25.525 [2024-12-16 19:36:09.753638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:34:25.525 [2024-12-16 19:36:09.753657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:34:25.525 [2024-12-16 19:36:09.753672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:25.525 [2024-12-16 19:36:09.753745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:25.525 [2024-12-16 19:36:09.753767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:34:25.525 [2024-12-16 19:36:09.753783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:34:25.525 [2024-12-16 19:36:09.753894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:25.525 [2024-12-16 19:36:09.753946] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:34:25.525 [2024-12-16 19:36:09.753965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 127488 / 261120 wr_cnt: 1 state: open 00:34:25.525 [2024-12-16 19:36:09.753990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:34:25.525 [2024-12-16 19:36:09.754013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:34:25.525 [2024-12-16 19:36:09.754036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:34:25.525 [2024-12-16 19:36:09.754060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:34:25.525 [2024-12-16 19:36:09.754220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:34:25.525 [2024-12-16 19:36:09.754247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:34:25.525 [2024-12-16 19:36:09.754269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:34:25.525 [2024-12-16 19:36:09.754292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:34:25.525 [2024-12-16 19:36:09.754352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:34:25.525 [2024-12-16 19:36:09.754376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:34:25.525 [2024-12-16 19:36:09.754407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:34:25.525 [2024-12-16 19:36:09.754430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 
261120 wr_cnt: 0 state: free 00:34:25.525 [2024-12-16 19:36:09.754589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:34:25.525 [2024-12-16 19:36:09.754614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:34:25.525 [2024-12-16 19:36:09.754637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:34:25.525 [2024-12-16 19:36:09.754660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:34:25.525 [2024-12-16 19:36:09.754682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:34:25.525 [2024-12-16 19:36:09.754705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:34:25.525 [2024-12-16 19:36:09.754747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:34:25.525 [2024-12-16 19:36:09.754802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:34:25.525 [2024-12-16 19:36:09.754826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:34:25.525 [2024-12-16 19:36:09.754848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:34:25.525 [2024-12-16 19:36:09.754871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:34:25.525 [2024-12-16 19:36:09.754893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:34:25.525 [2024-12-16 19:36:09.754916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:34:25.525 [2024-12-16 19:36:09.754970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:34:25.525 [2024-12-16 19:36:09.754996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:34:25.525 [2024-12-16 19:36:09.755018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:34:25.525 [2024-12-16 19:36:09.755041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:34:25.525 [2024-12-16 19:36:09.755064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:34:25.525 [2024-12-16 19:36:09.755086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:34:25.525 [2024-12-16 19:36:09.755134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:34:25.525 [2024-12-16 19:36:09.755158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:34:25.525 [2024-12-16 19:36:09.755190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:34:25.525 [2024-12-16 19:36:09.755214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:34:25.525 [2024-12-16 19:36:09.755236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:34:25.525 [2024-12-16 19:36:09.755259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:34:25.525 [2024-12-16 19:36:09.755319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:34:25.525 [2024-12-16 19:36:09.755362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:34:25.525 [2024-12-16 19:36:09.755387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:34:25.525 [2024-12-16 19:36:09.755410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:34:25.525 [2024-12-16 19:36:09.755432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:34:25.525 [2024-12-16 19:36:09.755456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:34:25.525 [2024-12-16 19:36:09.755495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:34:25.526 [2024-12-16 19:36:09.755527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:34:25.526 [2024-12-16 19:36:09.755550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:34:25.526 [2024-12-16 19:36:09.755573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:34:25.526 [2024-12-16 19:36:09.755595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:34:25.526 [2024-12-16 19:36:09.755617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:34:25.526 [2024-12-16 19:36:09.755659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:34:25.526 [2024-12-16 19:36:09.755801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:34:25.526 [2024-12-16 19:36:09.755824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:34:25.526 [2024-12-16 19:36:09.755848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:34:25.526 [2024-12-16 19:36:09.755870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:34:25.526 [2024-12-16 19:36:09.755893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:34:25.526 [2024-12-16 19:36:09.755940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:34:25.526 [2024-12-16 19:36:09.755963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:34:25.526 [2024-12-16 19:36:09.755986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:34:25.526 [2024-12-16 19:36:09.756009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:34:25.526 [2024-12-16 19:36:09.756032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:34:25.526 [2024-12-16 19:36:09.756071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:34:25.526 [2024-12-16 19:36:09.756097] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:34:25.526 [2024-12-16 19:36:09.756119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:34:25.526 [2024-12-16 19:36:09.756142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:34:25.526 [2024-12-16 19:36:09.756164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:34:25.526 [2024-12-16 19:36:09.756196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:34:25.526 [2024-12-16 19:36:09.756257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:34:25.526 [2024-12-16 19:36:09.756297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:34:25.526 [2024-12-16 19:36:09.756320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:34:25.526 [2024-12-16 19:36:09.756342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:34:25.526 [2024-12-16 19:36:09.756365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:34:25.526 [2024-12-16 19:36:09.756389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:34:25.526 [2024-12-16 19:36:09.756503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:34:25.526 [2024-12-16 19:36:09.756527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:34:25.526 [2024-12-16 19:36:09.756550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:34:25.526 [2024-12-16 19:36:09.756573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:34:25.526 [2024-12-16 19:36:09.756596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:34:25.526 [2024-12-16 19:36:09.756619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:34:25.526 [2024-12-16 19:36:09.756660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:34:25.526 [2024-12-16 19:36:09.756684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:34:25.526 [2024-12-16 19:36:09.756706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:34:25.526 [2024-12-16 19:36:09.756729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:34:25.526 [2024-12-16 19:36:09.756751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:34:25.526 [2024-12-16 19:36:09.756774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:34:25.526 [2024-12-16 19:36:09.756850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:34:25.526 [2024-12-16 19:36:09.756873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:34:25.526 [2024-12-16 
19:36:09.756897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:34:25.526 [2024-12-16 19:36:09.756919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:34:25.526 [2024-12-16 19:36:09.756942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:34:25.526 [2024-12-16 19:36:09.756983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:34:25.526 [2024-12-16 19:36:09.757028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:34:25.526 [2024-12-16 19:36:09.757068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:34:25.526 [2024-12-16 19:36:09.757093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:34:25.526 [2024-12-16 19:36:09.757116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:34:25.526 [2024-12-16 19:36:09.757138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:34:25.526 [2024-12-16 19:36:09.757160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:34:25.526 [2024-12-16 19:36:09.757239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:34:25.526 [2024-12-16 19:36:09.757267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:34:25.526 [2024-12-16 19:36:09.757292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:34:25.526 [2024-12-16 19:36:09.757321] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:34:25.526 [2024-12-16 19:36:09.757336] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 6c279ede-0f39-4354-bc49-0382ad208b2d 00:34:25.526 [2024-12-16 19:36:09.757359] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 127488 00:34:25.526 [2024-12-16 19:36:09.757373] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 127520 00:34:25.526 [2024-12-16 19:36:09.757389] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 127488 00:34:25.526 [2024-12-16 19:36:09.757404] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0003 00:34:25.526 [2024-12-16 19:36:09.757423] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:34:25.526 [2024-12-16 19:36:09.757437] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:34:25.526 [2024-12-16 19:36:09.757479] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:34:25.526 [2024-12-16 19:36:09.757511] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:34:25.526 [2024-12-16 19:36:09.757527] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:34:25.526 [2024-12-16 19:36:09.757555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:25.526 [2024-12-16 19:36:09.757571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:34:25.526 [2024-12-16 19:36:09.757588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.610 ms 00:34:25.526 [2024-12-16 19:36:09.757603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:34:25.526 [2024-12-16 19:36:09.767245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:25.526 [2024-12-16 19:36:09.767334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:34:25.526 [2024-12-16 19:36:09.767379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.619 ms 00:34:25.526 [2024-12-16 19:36:09.767396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:25.526 [2024-12-16 19:36:09.767691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:25.526 [2024-12-16 19:36:09.767755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:34:25.526 [2024-12-16 19:36:09.767797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.254 ms 00:34:25.526 [2024-12-16 19:36:09.767814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:25.526 [2024-12-16 19:36:09.793621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:25.526 [2024-12-16 19:36:09.793719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:34:25.526 [2024-12-16 19:36:09.793759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:25.526 [2024-12-16 19:36:09.793776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:25.526 [2024-12-16 19:36:09.793827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:25.526 [2024-12-16 19:36:09.793843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:34:25.526 [2024-12-16 19:36:09.793858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:25.526 [2024-12-16 19:36:09.793872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:25.526 [2024-12-16 19:36:09.793919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:25.526 [2024-12-16 19:36:09.793938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:34:25.526 [2024-12-16 19:36:09.793957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:25.526 [2024-12-16 19:36:09.793974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:25.526 [2024-12-16 19:36:09.793994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:25.526 [2024-12-16 19:36:09.794010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:34:25.526 [2024-12-16 19:36:09.794025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:25.526 [2024-12-16 19:36:09.794039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:25.526 [2024-12-16 19:36:09.852983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:25.526 [2024-12-16 19:36:09.853093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:34:25.526 [2024-12-16 19:36:09.853132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:25.526 [2024-12-16 19:36:09.853150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:25.817 [2024-12-16 19:36:09.901209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:25.817 [2024-12-16 19:36:09.901317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:34:25.817 [2024-12-16 19:36:09.901356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:25.817 [2024-12-16 
19:36:09.901373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:25.817 [2024-12-16 19:36:09.901425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:25.818 [2024-12-16 19:36:09.901442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:34:25.818 [2024-12-16 19:36:09.901457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:25.818 [2024-12-16 19:36:09.901476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:25.818 [2024-12-16 19:36:09.901528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:25.818 [2024-12-16 19:36:09.901546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:34:25.818 [2024-12-16 19:36:09.901562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:25.818 [2024-12-16 19:36:09.901600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:25.818 [2024-12-16 19:36:09.901694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:25.818 [2024-12-16 19:36:09.901714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:34:25.818 [2024-12-16 19:36:09.901752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:25.818 [2024-12-16 19:36:09.901768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:25.818 [2024-12-16 19:36:09.901804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:25.818 [2024-12-16 19:36:09.901821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:34:25.818 [2024-12-16 19:36:09.901862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:25.818 [2024-12-16 19:36:09.901879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:25.818 [2024-12-16 19:36:09.901915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:25.818 [2024-12-16 19:36:09.901931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:34:25.818 [2024-12-16 19:36:09.901946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:25.818 [2024-12-16 19:36:09.901983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:25.818 [2024-12-16 19:36:09.902032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:25.818 [2024-12-16 19:36:09.902162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:34:25.818 [2024-12-16 19:36:09.902186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:25.818 [2024-12-16 19:36:09.902193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:25.818 [2024-12-16 19:36:09.902291] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 164.576 ms, result 0 00:34:27.211 00:34:27.211 00:34:27.211 19:36:11 ftl.ftl_restore_fast -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:34:27.211 [2024-12-16 19:36:11.274756] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
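
(For orientation: the restore.sh steps quoted in this run, @76 through @80, amount to a write/read/verify round trip through the ftl0 bdev. The sketch below is a minimal illustration of that pattern with placeholder paths, not the actual test script; the spdk_dd flags are the ones visible in the log above, and the offset and count values are the ones the log shows being passed to --seek, --skip and --count.)

    FTL_JSON=/path/to/ftl.json    # bdev config that defines ftl0 (placeholder path)
    TESTFILE=/path/to/testfile    # payload file (placeholder path)

    md5sum "$TESTFILE" > "$TESTFILE.md5"                                                  # record the expected digest
    spdk_dd --if="$TESTFILE" --ob=ftl0 --json="$FTL_JSON" --seek=131072                   # write pass (cf. restore.sh@79)
    # ... the 'FTL fast shutdown' / 'FTL startup' management sequences above sit between the two passes ...
    spdk_dd --ib=ftl0 --of="$TESTFILE" --json="$FTL_JSON" --skip=131072 --count=262144    # read-back pass (cf. restore.sh@80)
    md5sum -c "$TESTFILE.md5"                                                             # "OK" means the data survived the fast shutdown (cf. restore.sh@76)
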
00:34:27.211 [2024-12-16 19:36:11.275025] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87652 ] 00:34:27.211 [2024-12-16 19:36:11.431905] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:27.211 [2024-12-16 19:36:11.511012] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:34:27.473 [2024-12-16 19:36:11.720627] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:34:27.473 [2024-12-16 19:36:11.720676] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:34:27.736 [2024-12-16 19:36:11.871694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:27.736 [2024-12-16 19:36:11.871731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:34:27.736 [2024-12-16 19:36:11.871741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:34:27.736 [2024-12-16 19:36:11.871747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:27.736 [2024-12-16 19:36:11.871781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:27.736 [2024-12-16 19:36:11.871790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:34:27.736 [2024-12-16 19:36:11.871796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:34:27.736 [2024-12-16 19:36:11.871802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:27.736 [2024-12-16 19:36:11.871814] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:34:27.736 [2024-12-16 19:36:11.872371] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:34:27.736 [2024-12-16 19:36:11.872383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:27.736 [2024-12-16 19:36:11.872389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:34:27.736 [2024-12-16 19:36:11.872395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.572 ms 00:34:27.736 [2024-12-16 19:36:11.872400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:27.736 [2024-12-16 19:36:11.872588] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:34:27.736 [2024-12-16 19:36:11.872605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:27.736 [2024-12-16 19:36:11.872613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:34:27.736 [2024-12-16 19:36:11.872619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:34:27.736 [2024-12-16 19:36:11.872625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:27.736 [2024-12-16 19:36:11.872680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:27.736 [2024-12-16 19:36:11.872687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:34:27.736 [2024-12-16 19:36:11.872694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:34:27.736 [2024-12-16 19:36:11.872699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:27.736 [2024-12-16 19:36:11.872892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:34:27.736 [2024-12-16 19:36:11.872900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:34:27.736 [2024-12-16 19:36:11.872907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.169 ms 00:34:27.736 [2024-12-16 19:36:11.872912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:27.736 [2024-12-16 19:36:11.872959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:27.736 [2024-12-16 19:36:11.872966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:34:27.736 [2024-12-16 19:36:11.872972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:34:27.736 [2024-12-16 19:36:11.872977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:27.736 [2024-12-16 19:36:11.872992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:27.736 [2024-12-16 19:36:11.872998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:34:27.736 [2024-12-16 19:36:11.873006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:34:27.736 [2024-12-16 19:36:11.873012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:27.736 [2024-12-16 19:36:11.873024] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:34:27.736 [2024-12-16 19:36:11.875992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:27.736 [2024-12-16 19:36:11.876086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:34:27.736 [2024-12-16 19:36:11.876136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.971 ms 00:34:27.736 [2024-12-16 19:36:11.876154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:27.736 [2024-12-16 19:36:11.876200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:27.736 [2024-12-16 19:36:11.876253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:34:27.736 [2024-12-16 19:36:11.876271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:34:27.736 [2024-12-16 19:36:11.876287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:27.736 [2024-12-16 19:36:11.876366] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:34:27.736 [2024-12-16 19:36:11.876397] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:34:27.736 [2024-12-16 19:36:11.876443] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:34:27.736 [2024-12-16 19:36:11.876612] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:34:27.736 [2024-12-16 19:36:11.876707] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:34:27.736 [2024-12-16 19:36:11.876733] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:34:27.736 [2024-12-16 19:36:11.876757] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:34:27.736 [2024-12-16 19:36:11.876819] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:34:27.736 [2024-12-16 19:36:11.876843] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:34:27.736 [2024-12-16 19:36:11.876869] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:34:27.736 [2024-12-16 19:36:11.876883] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:34:27.736 [2024-12-16 19:36:11.876897] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:34:27.736 [2024-12-16 19:36:11.876930] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:34:27.736 [2024-12-16 19:36:11.876967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:27.736 [2024-12-16 19:36:11.876984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:34:27.736 [2024-12-16 19:36:11.877014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.603 ms 00:34:27.736 [2024-12-16 19:36:11.877030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:27.736 [2024-12-16 19:36:11.877108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:27.736 [2024-12-16 19:36:11.877124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:34:27.736 [2024-12-16 19:36:11.877189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:34:27.736 [2024-12-16 19:36:11.877210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:27.736 [2024-12-16 19:36:11.877294] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:34:27.736 [2024-12-16 19:36:11.877312] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:34:27.736 [2024-12-16 19:36:11.877354] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:34:27.736 [2024-12-16 19:36:11.877372] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:27.736 [2024-12-16 19:36:11.877386] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:34:27.736 [2024-12-16 19:36:11.877400] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:34:27.736 [2024-12-16 19:36:11.877440] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:34:27.736 [2024-12-16 19:36:11.877457] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:34:27.736 [2024-12-16 19:36:11.877471] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:34:27.736 [2024-12-16 19:36:11.877486] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:34:27.736 [2024-12-16 19:36:11.877500] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:34:27.736 [2024-12-16 19:36:11.877538] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:34:27.736 [2024-12-16 19:36:11.877555] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:34:27.736 [2024-12-16 19:36:11.877569] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:34:27.736 [2024-12-16 19:36:11.877583] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:34:27.736 [2024-12-16 19:36:11.877603] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:27.736 [2024-12-16 19:36:11.877634] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:34:27.736 [2024-12-16 19:36:11.877650] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:34:27.736 [2024-12-16 19:36:11.877695] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:27.736 [2024-12-16 19:36:11.877712] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:34:27.736 [2024-12-16 19:36:11.877726] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:34:27.736 [2024-12-16 19:36:11.877757] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:27.736 [2024-12-16 19:36:11.877772] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:34:27.736 [2024-12-16 19:36:11.877786] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:34:27.736 [2024-12-16 19:36:11.877800] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:27.736 [2024-12-16 19:36:11.877814] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:34:27.736 [2024-12-16 19:36:11.877828] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:34:27.736 [2024-12-16 19:36:11.877864] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:27.737 [2024-12-16 19:36:11.877880] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:34:27.737 [2024-12-16 19:36:11.877894] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:34:27.737 [2024-12-16 19:36:11.877908] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:27.737 [2024-12-16 19:36:11.877922] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:34:27.737 [2024-12-16 19:36:11.877936] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:34:27.737 [2024-12-16 19:36:11.877969] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:34:27.737 [2024-12-16 19:36:11.877985] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:34:27.737 [2024-12-16 19:36:11.877999] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:34:27.737 [2024-12-16 19:36:11.878014] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:34:27.737 [2024-12-16 19:36:11.878082] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:34:27.737 [2024-12-16 19:36:11.878096] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:34:27.737 [2024-12-16 19:36:11.878110] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:27.737 [2024-12-16 19:36:11.878144] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:34:27.737 [2024-12-16 19:36:11.878161] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:34:27.737 [2024-12-16 19:36:11.878191] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:27.737 [2024-12-16 19:36:11.878209] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:34:27.737 [2024-12-16 19:36:11.878225] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:34:27.737 [2024-12-16 19:36:11.878239] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:34:27.737 [2024-12-16 19:36:11.878254] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:27.737 [2024-12-16 19:36:11.878272] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:34:27.737 [2024-12-16 19:36:11.878286] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:34:27.737 [2024-12-16 19:36:11.878300] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:34:27.737 
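The layout figures above can be cross-checked by hand: the l2p region has to be just large enough to hold one mapping entry per L2P entry at the reported address size. An illustrative shell check (editorial note, not output from the captured run):

    entries=20971520   # "L2P entries" from the layout setup above
    addr_size=4        # "L2P address size" in bytes
    echo "$(( entries * addr_size / 1024 / 1024 )) MiB"   # -> 80 MiB, matching "Region l2p ... blocks: 80.00 MiB"
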
[2024-12-16 19:36:11.878345] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:34:27.737 [2024-12-16 19:36:11.878361] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:34:27.737 [2024-12-16 19:36:11.878384] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:34:27.737 [2024-12-16 19:36:11.878399] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:34:27.737 [2024-12-16 19:36:11.878423] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:34:27.737 [2024-12-16 19:36:11.878446] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:34:27.737 [2024-12-16 19:36:11.878468] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:34:27.737 [2024-12-16 19:36:11.878490] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:34:27.737 [2024-12-16 19:36:11.878568] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:34:27.737 [2024-12-16 19:36:11.878589] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:34:27.737 [2024-12-16 19:36:11.878611] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:34:27.737 [2024-12-16 19:36:11.878632] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:34:27.737 [2024-12-16 19:36:11.878654] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:34:27.737 [2024-12-16 19:36:11.878700] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:34:27.737 [2024-12-16 19:36:11.878724] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:34:27.737 [2024-12-16 19:36:11.878746] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:34:27.737 [2024-12-16 19:36:11.878841] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:34:27.737 [2024-12-16 19:36:11.878864] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:34:27.737 [2024-12-16 19:36:11.878886] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:34:27.737 [2024-12-16 19:36:11.878908] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:34:27.737 [2024-12-16 19:36:11.878930] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:34:27.737 [2024-12-16 19:36:11.878953] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:34:27.737 [2024-12-16 19:36:11.878975] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:34:27.737 [2024-12-16 19:36:11.879041] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:34:27.737 [2024-12-16 19:36:11.879064] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:34:27.737 [2024-12-16 19:36:11.879087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:27.737 [2024-12-16 19:36:11.879102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:34:27.737 [2024-12-16 19:36:11.879123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.845 ms 00:34:27.737 [2024-12-16 19:36:11.879137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:27.737 [2024-12-16 19:36:11.897621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:27.737 [2024-12-16 19:36:11.897719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:34:27.737 [2024-12-16 19:36:11.897762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.406 ms 00:34:27.737 [2024-12-16 19:36:11.897796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:27.737 [2024-12-16 19:36:11.897869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:27.737 [2024-12-16 19:36:11.898241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:34:27.737 [2024-12-16 19:36:11.898593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:34:27.737 [2024-12-16 19:36:11.898636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:27.737 [2024-12-16 19:36:11.941195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:27.737 [2024-12-16 19:36:11.941233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:34:27.737 [2024-12-16 19:36:11.941245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.365 ms 00:34:27.737 [2024-12-16 19:36:11.941253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:27.737 [2024-12-16 19:36:11.941299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:27.737 [2024-12-16 19:36:11.941309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:34:27.737 [2024-12-16 19:36:11.941318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:34:27.737 [2024-12-16 19:36:11.941324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:27.737 [2024-12-16 19:36:11.941411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:27.737 [2024-12-16 19:36:11.941422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:34:27.737 [2024-12-16 19:36:11.941430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:34:27.737 [2024-12-16 19:36:11.941438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:27.737 [2024-12-16 19:36:11.941549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:27.737 [2024-12-16 19:36:11.941560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:34:27.737 [2024-12-16 19:36:11.941568] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:34:27.737 [2024-12-16 19:36:11.941575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:27.737 [2024-12-16 19:36:11.954680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:27.737 [2024-12-16 19:36:11.954805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:34:27.737 [2024-12-16 19:36:11.954821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.088 ms 00:34:27.737 [2024-12-16 19:36:11.954828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:27.737 [2024-12-16 19:36:11.954940] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:34:27.737 [2024-12-16 19:36:11.954952] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:34:27.737 [2024-12-16 19:36:11.954962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:27.737 [2024-12-16 19:36:11.954971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:34:27.737 [2024-12-16 19:36:11.954980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:34:27.737 [2024-12-16 19:36:11.954987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:27.737 [2024-12-16 19:36:11.967390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:27.737 [2024-12-16 19:36:11.967417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:34:27.737 [2024-12-16 19:36:11.967428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.389 ms 00:34:27.737 [2024-12-16 19:36:11.967436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:27.737 [2024-12-16 19:36:11.967547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:27.737 [2024-12-16 19:36:11.967555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:34:27.737 [2024-12-16 19:36:11.967563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:34:27.738 [2024-12-16 19:36:11.967573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:27.738 [2024-12-16 19:36:11.967632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:27.738 [2024-12-16 19:36:11.967643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:34:27.738 [2024-12-16 19:36:11.967650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:34:27.738 [2024-12-16 19:36:11.967663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:27.738 [2024-12-16 19:36:11.968235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:27.738 [2024-12-16 19:36:11.968248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:34:27.738 [2024-12-16 19:36:11.968256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.537 ms 00:34:27.738 [2024-12-16 19:36:11.968263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:27.738 [2024-12-16 19:36:11.968282] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:34:27.738 [2024-12-16 19:36:11.968292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:27.738 [2024-12-16 19:36:11.968299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore P2L checkpoints 00:34:27.738 [2024-12-16 19:36:11.968307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:34:27.738 [2024-12-16 19:36:11.968314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:27.738 [2024-12-16 19:36:11.979357] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:34:27.738 [2024-12-16 19:36:11.979497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:27.738 [2024-12-16 19:36:11.979506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:34:27.738 [2024-12-16 19:36:11.979515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.166 ms 00:34:27.738 [2024-12-16 19:36:11.979522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:27.738 [2024-12-16 19:36:11.981642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:27.738 [2024-12-16 19:36:11.981746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:34:27.738 [2024-12-16 19:36:11.981760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.101 ms 00:34:27.738 [2024-12-16 19:36:11.981768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:27.738 [2024-12-16 19:36:11.981832] mngt/ftl_mngt_band.c: 414:ftl_mngt_finalize_init_bands: *NOTICE*: [FTL][ftl0] SHM: band open P2L map df_id 0x2400000 00:34:27.738 [2024-12-16 19:36:11.982284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:27.738 [2024-12-16 19:36:11.982294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:34:27.738 [2024-12-16 19:36:11.982302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.468 ms 00:34:27.738 [2024-12-16 19:36:11.982309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:27.738 [2024-12-16 19:36:11.982333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:27.738 [2024-12-16 19:36:11.982341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:34:27.738 [2024-12-16 19:36:11.982349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:34:27.738 [2024-12-16 19:36:11.982355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:27.738 [2024-12-16 19:36:11.982399] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:34:27.738 [2024-12-16 19:36:11.982409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:27.738 [2024-12-16 19:36:11.982417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:34:27.738 [2024-12-16 19:36:11.982424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:34:27.738 [2024-12-16 19:36:11.982431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:27.738 [2024-12-16 19:36:12.006646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:27.738 [2024-12-16 19:36:12.006680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:34:27.738 [2024-12-16 19:36:12.006690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.198 ms 00:34:27.738 [2024-12-16 19:36:12.006697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:27.738 [2024-12-16 19:36:12.006760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:27.738 [2024-12-16 19:36:12.006769] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:34:27.738 [2024-12-16 19:36:12.006777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:34:27.738 [2024-12-16 19:36:12.006783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:27.738 [2024-12-16 19:36:12.007654] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 135.576 ms, result 0 00:34:29.126  [2024-12-16T19:36:14.422Z] Copying: 20/1024 [MB] (20 MBps) [2024-12-16T19:36:15.366Z] Copying: 37/1024 [MB] (17 MBps) [2024-12-16T19:36:16.311Z] Copying: 60/1024 [MB] (22 MBps) [2024-12-16T19:36:17.255Z] Copying: 76/1024 [MB] (16 MBps) [2024-12-16T19:36:18.198Z] Copying: 95/1024 [MB] (19 MBps) [2024-12-16T19:36:19.584Z] Copying: 119/1024 [MB] (23 MBps) [2024-12-16T19:36:20.529Z] Copying: 138/1024 [MB] (18 MBps) [2024-12-16T19:36:21.474Z] Copying: 156/1024 [MB] (18 MBps) [2024-12-16T19:36:22.417Z] Copying: 181/1024 [MB] (25 MBps) [2024-12-16T19:36:23.361Z] Copying: 204/1024 [MB] (22 MBps) [2024-12-16T19:36:24.306Z] Copying: 225/1024 [MB] (21 MBps) [2024-12-16T19:36:25.251Z] Copying: 242/1024 [MB] (17 MBps) [2024-12-16T19:36:26.637Z] Copying: 263/1024 [MB] (20 MBps) [2024-12-16T19:36:27.210Z] Copying: 274/1024 [MB] (10 MBps) [2024-12-16T19:36:28.601Z] Copying: 285/1024 [MB] (10 MBps) [2024-12-16T19:36:29.546Z] Copying: 298/1024 [MB] (13 MBps) [2024-12-16T19:36:30.491Z] Copying: 312/1024 [MB] (13 MBps) [2024-12-16T19:36:31.437Z] Copying: 322/1024 [MB] (10 MBps) [2024-12-16T19:36:32.383Z] Copying: 334/1024 [MB] (11 MBps) [2024-12-16T19:36:33.327Z] Copying: 345/1024 [MB] (10 MBps) [2024-12-16T19:36:34.272Z] Copying: 357/1024 [MB] (11 MBps) [2024-12-16T19:36:35.216Z] Copying: 367/1024 [MB] (10 MBps) [2024-12-16T19:36:36.601Z] Copying: 384/1024 [MB] (16 MBps) [2024-12-16T19:36:37.546Z] Copying: 397/1024 [MB] (13 MBps) [2024-12-16T19:36:38.492Z] Copying: 417/1024 [MB] (19 MBps) [2024-12-16T19:36:39.436Z] Copying: 429/1024 [MB] (11 MBps) [2024-12-16T19:36:40.382Z] Copying: 441/1024 [MB] (12 MBps) [2024-12-16T19:36:41.388Z] Copying: 455/1024 [MB] (14 MBps) [2024-12-16T19:36:42.331Z] Copying: 476/1024 [MB] (21 MBps) [2024-12-16T19:36:43.272Z] Copying: 500/1024 [MB] (23 MBps) [2024-12-16T19:36:44.213Z] Copying: 520/1024 [MB] (20 MBps) [2024-12-16T19:36:45.598Z] Copying: 542/1024 [MB] (21 MBps) [2024-12-16T19:36:46.542Z] Copying: 562/1024 [MB] (20 MBps) [2024-12-16T19:36:47.486Z] Copying: 582/1024 [MB] (19 MBps) [2024-12-16T19:36:48.429Z] Copying: 598/1024 [MB] (16 MBps) [2024-12-16T19:36:49.371Z] Copying: 612/1024 [MB] (14 MBps) [2024-12-16T19:36:50.315Z] Copying: 638/1024 [MB] (25 MBps) [2024-12-16T19:36:51.259Z] Copying: 653/1024 [MB] (14 MBps) [2024-12-16T19:36:52.203Z] Copying: 674/1024 [MB] (20 MBps) [2024-12-16T19:36:53.590Z] Copying: 693/1024 [MB] (18 MBps) [2024-12-16T19:36:54.533Z] Copying: 714/1024 [MB] (21 MBps) [2024-12-16T19:36:55.474Z] Copying: 733/1024 [MB] (19 MBps) [2024-12-16T19:36:56.415Z] Copying: 745/1024 [MB] (11 MBps) [2024-12-16T19:36:57.358Z] Copying: 762/1024 [MB] (17 MBps) [2024-12-16T19:36:58.300Z] Copying: 773/1024 [MB] (10 MBps) [2024-12-16T19:36:59.244Z] Copying: 790/1024 [MB] (16 MBps) [2024-12-16T19:37:00.629Z] Copying: 800/1024 [MB] (10 MBps) [2024-12-16T19:37:01.201Z] Copying: 812/1024 [MB] (11 MBps) [2024-12-16T19:37:02.587Z] Copying: 826/1024 [MB] (14 MBps) [2024-12-16T19:37:03.529Z] Copying: 844/1024 [MB] (17 MBps) [2024-12-16T19:37:04.473Z] Copying: 861/1024 [MB] (16 MBps) 
[2024-12-16T19:37:05.416Z] Copying: 878/1024 [MB] (16 MBps) [2024-12-16T19:37:06.358Z] Copying: 893/1024 [MB] (15 MBps) [2024-12-16T19:37:07.300Z] Copying: 914/1024 [MB] (21 MBps) [2024-12-16T19:37:08.243Z] Copying: 934/1024 [MB] (19 MBps) [2024-12-16T19:37:09.238Z] Copying: 957/1024 [MB] (23 MBps) [2024-12-16T19:37:10.203Z] Copying: 968/1024 [MB] (10 MBps) [2024-12-16T19:37:11.589Z] Copying: 978/1024 [MB] (10 MBps) [2024-12-16T19:37:12.532Z] Copying: 989/1024 [MB] (10 MBps) [2024-12-16T19:37:13.474Z] Copying: 999/1024 [MB] (10 MBps) [2024-12-16T19:37:14.419Z] Copying: 1009/1024 [MB] (10 MBps) [2024-12-16T19:37:14.419Z] Copying: 1021/1024 [MB] (11 MBps) [2024-12-16T19:37:14.419Z] Copying: 1024/1024 [MB] (average 16 MBps)[2024-12-16 19:37:14.405168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:30.065 [2024-12-16 19:37:14.405258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:35:30.065 [2024-12-16 19:37:14.405275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:35:30.065 [2024-12-16 19:37:14.405284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:30.065 [2024-12-16 19:37:14.405306] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:35:30.065 [2024-12-16 19:37:14.408599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:30.065 [2024-12-16 19:37:14.408648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:35:30.065 [2024-12-16 19:37:14.408660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.274 ms 00:35:30.065 [2024-12-16 19:37:14.408675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:30.065 [2024-12-16 19:37:14.408914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:30.065 [2024-12-16 19:37:14.408926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:35:30.065 [2024-12-16 19:37:14.408935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.211 ms 00:35:30.065 [2024-12-16 19:37:14.408943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:30.065 [2024-12-16 19:37:14.408972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:30.065 [2024-12-16 19:37:14.408981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:35:30.065 [2024-12-16 19:37:14.408989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:35:30.065 [2024-12-16 19:37:14.408997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:30.065 [2024-12-16 19:37:14.409055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:30.065 [2024-12-16 19:37:14.409067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:35:30.065 [2024-12-16 19:37:14.409075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:35:30.065 [2024-12-16 19:37:14.409083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:30.065 [2024-12-16 19:37:14.409096] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:35:30.065 [2024-12-16 19:37:14.409109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:35:30.065 [2024-12-16 19:37:14.409119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:35:30.065 
[2024-12-16 19:37:14.409127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:35:30.065 [2024-12-16 19:37:14.409135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:35:30.065 [2024-12-16 19:37:14.409142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:35:30.065 [2024-12-16 19:37:14.409150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:35:30.065 [2024-12-16 19:37:14.409157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:35:30.065 [2024-12-16 19:37:14.409165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:35:30.065 [2024-12-16 19:37:14.409190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:35:30.065 [2024-12-16 19:37:14.409198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:35:30.065 [2024-12-16 19:37:14.409206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:35:30.065 [2024-12-16 19:37:14.409213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:35:30.065 [2024-12-16 19:37:14.409221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:35:30.065 [2024-12-16 19:37:14.409231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:35:30.065 [2024-12-16 19:37:14.409239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:35:30.065 [2024-12-16 19:37:14.409246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:35:30.065 [2024-12-16 19:37:14.409254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:35:30.065 [2024-12-16 19:37:14.409263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:35:30.065 [2024-12-16 19:37:14.409271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:35:30.065 [2024-12-16 19:37:14.409279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:35:30.065 [2024-12-16 19:37:14.409287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:35:30.065 [2024-12-16 19:37:14.409296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:35:30.065 [2024-12-16 19:37:14.409305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:35:30.065 [2024-12-16 19:37:14.409312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:35:30.065 [2024-12-16 19:37:14.409319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:35:30.065 [2024-12-16 19:37:14.409326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:35:30.065 [2024-12-16 19:37:14.409334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 
00:35:30.065 [2024-12-16 19:37:14.409341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:35:30.065 [2024-12-16 19:37:14.409349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:35:30.065 [2024-12-16 19:37:14.409355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:35:30.065 [2024-12-16 19:37:14.409363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:35:30.065 [2024-12-16 19:37:14.409371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:35:30.065 [2024-12-16 19:37:14.409378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:35:30.065 [2024-12-16 19:37:14.409385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:35:30.065 [2024-12-16 19:37:14.409392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:35:30.065 [2024-12-16 19:37:14.409399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:35:30.065 [2024-12-16 19:37:14.409406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:35:30.065 [2024-12-16 19:37:14.409414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:35:30.065 [2024-12-16 19:37:14.409421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:35:30.065 [2024-12-16 19:37:14.409428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:35:30.065 [2024-12-16 19:37:14.409435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:35:30.065 [2024-12-16 19:37:14.409442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:35:30.065 [2024-12-16 19:37:14.409450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:35:30.065 [2024-12-16 19:37:14.409457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:35:30.065 [2024-12-16 19:37:14.409464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:35:30.065 [2024-12-16 19:37:14.409480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:35:30.065 [2024-12-16 19:37:14.409489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:35:30.065 [2024-12-16 19:37:14.409497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:35:30.065 [2024-12-16 19:37:14.409505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:35:30.065 [2024-12-16 19:37:14.409512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:35:30.065 [2024-12-16 19:37:14.409519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:35:30.065 [2024-12-16 19:37:14.409526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 
wr_cnt: 0 state: free 00:35:30.065 [2024-12-16 19:37:14.409534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:35:30.066 [2024-12-16 19:37:14.409543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:35:30.066 [2024-12-16 19:37:14.409550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:35:30.066 [2024-12-16 19:37:14.409558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:35:30.066 [2024-12-16 19:37:14.409565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:35:30.066 [2024-12-16 19:37:14.409573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:35:30.066 [2024-12-16 19:37:14.409580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:35:30.066 [2024-12-16 19:37:14.409587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:35:30.066 [2024-12-16 19:37:14.409594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:35:30.066 [2024-12-16 19:37:14.409602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:35:30.066 [2024-12-16 19:37:14.409609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:35:30.066 [2024-12-16 19:37:14.409616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:35:30.066 [2024-12-16 19:37:14.409624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:35:30.066 [2024-12-16 19:37:14.409633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:35:30.066 [2024-12-16 19:37:14.409640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:35:30.066 [2024-12-16 19:37:14.409648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:35:30.066 [2024-12-16 19:37:14.409655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:35:30.066 [2024-12-16 19:37:14.409663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:35:30.066 [2024-12-16 19:37:14.409670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:35:30.066 [2024-12-16 19:37:14.409677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:35:30.066 [2024-12-16 19:37:14.409685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:35:30.066 [2024-12-16 19:37:14.409692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:35:30.066 [2024-12-16 19:37:14.409699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:35:30.066 [2024-12-16 19:37:14.409707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:35:30.066 [2024-12-16 19:37:14.409715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 77: 0 / 261120 wr_cnt: 0 state: free 00:35:30.066 [2024-12-16 19:37:14.409722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:35:30.066 [2024-12-16 19:37:14.409730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:35:30.066 [2024-12-16 19:37:14.409739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:35:30.066 [2024-12-16 19:37:14.409747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:35:30.066 [2024-12-16 19:37:14.409754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:35:30.066 [2024-12-16 19:37:14.409761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:35:30.066 [2024-12-16 19:37:14.409768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:35:30.066 [2024-12-16 19:37:14.409775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:35:30.066 [2024-12-16 19:37:14.409783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:35:30.066 [2024-12-16 19:37:14.409790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:35:30.066 [2024-12-16 19:37:14.409798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:35:30.066 [2024-12-16 19:37:14.409805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:35:30.066 [2024-12-16 19:37:14.409813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:35:30.066 [2024-12-16 19:37:14.409820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:35:30.066 [2024-12-16 19:37:14.409828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:35:30.066 [2024-12-16 19:37:14.409835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:35:30.066 [2024-12-16 19:37:14.409842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:35:30.066 [2024-12-16 19:37:14.409849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:35:30.066 [2024-12-16 19:37:14.409856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:35:30.066 [2024-12-16 19:37:14.409864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:35:30.066 [2024-12-16 19:37:14.409872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:35:30.066 [2024-12-16 19:37:14.409880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:35:30.066 [2024-12-16 19:37:14.409887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:35:30.066 [2024-12-16 19:37:14.409902] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:35:30.066 [2024-12-16 19:37:14.409910] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 
6c279ede-0f39-4354-bc49-0382ad208b2d 00:35:30.066 [2024-12-16 19:37:14.409917] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:35:30.066 [2024-12-16 19:37:14.409925] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 3616 00:35:30.066 [2024-12-16 19:37:14.409933] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 3584 00:35:30.066 [2024-12-16 19:37:14.409941] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0089 00:35:30.066 [2024-12-16 19:37:14.409951] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:35:30.066 [2024-12-16 19:37:14.409960] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:35:30.066 [2024-12-16 19:37:14.409966] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:35:30.066 [2024-12-16 19:37:14.409973] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:35:30.066 [2024-12-16 19:37:14.409980] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:35:30.066 [2024-12-16 19:37:14.409987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:30.066 [2024-12-16 19:37:14.409995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:35:30.066 [2024-12-16 19:37:14.410003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.891 ms 00:35:30.066 [2024-12-16 19:37:14.410012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:30.327 [2024-12-16 19:37:14.424035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:30.327 [2024-12-16 19:37:14.424259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:35:30.327 [2024-12-16 19:37:14.424289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.008 ms 00:35:30.327 [2024-12-16 19:37:14.424297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:30.327 [2024-12-16 19:37:14.424687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:30.327 [2024-12-16 19:37:14.424704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:35:30.327 [2024-12-16 19:37:14.424713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.365 ms 00:35:30.327 [2024-12-16 19:37:14.424721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:30.327 [2024-12-16 19:37:14.461532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:30.327 [2024-12-16 19:37:14.461584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:35:30.327 [2024-12-16 19:37:14.461596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:30.327 [2024-12-16 19:37:14.461605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:30.327 [2024-12-16 19:37:14.461679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:30.327 [2024-12-16 19:37:14.461689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:35:30.327 [2024-12-16 19:37:14.461699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:30.327 [2024-12-16 19:37:14.461709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:30.327 [2024-12-16 19:37:14.461765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:30.327 [2024-12-16 19:37:14.461780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 
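The statistics dump just above is internally consistent: the 131072 "total valid LBAs" equal the valid-block count reported earlier for Band 1 (131072 / 261120), and WAF is simply total media writes divided by user writes. An illustrative check of the reported 1.0089 (editorial note, not part of the run):

    awk 'BEGIN { printf "%.4f\n", 3616 / 3584 }'   # total writes / user writes -> 1.0089
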
00:35:30.327 [2024-12-16 19:37:14.461789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:30.327 [2024-12-16 19:37:14.461797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:30.327 [2024-12-16 19:37:14.461813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:30.327 [2024-12-16 19:37:14.461823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:35:30.327 [2024-12-16 19:37:14.461833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:30.327 [2024-12-16 19:37:14.461842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:30.327 [2024-12-16 19:37:14.548055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:30.327 [2024-12-16 19:37:14.548315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:35:30.327 [2024-12-16 19:37:14.548340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:30.327 [2024-12-16 19:37:14.548350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:30.328 [2024-12-16 19:37:14.617851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:30.328 [2024-12-16 19:37:14.617911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:35:30.328 [2024-12-16 19:37:14.617924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:30.328 [2024-12-16 19:37:14.617933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:30.328 [2024-12-16 19:37:14.618017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:30.328 [2024-12-16 19:37:14.618028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:35:30.328 [2024-12-16 19:37:14.618044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:30.328 [2024-12-16 19:37:14.618052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:30.328 [2024-12-16 19:37:14.618091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:30.328 [2024-12-16 19:37:14.618101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:35:30.328 [2024-12-16 19:37:14.618111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:30.328 [2024-12-16 19:37:14.618119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:30.328 [2024-12-16 19:37:14.618239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:30.328 [2024-12-16 19:37:14.618251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:35:30.328 [2024-12-16 19:37:14.618260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:30.328 [2024-12-16 19:37:14.618273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:30.328 [2024-12-16 19:37:14.618304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:30.328 [2024-12-16 19:37:14.618314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:35:30.328 [2024-12-16 19:37:14.618322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:30.328 [2024-12-16 19:37:14.618331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:30.328 [2024-12-16 19:37:14.618369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:30.328 [2024-12-16 19:37:14.618379] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:35:30.328 [2024-12-16 19:37:14.618387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:30.328 [2024-12-16 19:37:14.618398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:30.328 [2024-12-16 19:37:14.618446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:30.328 [2024-12-16 19:37:14.618457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:35:30.328 [2024-12-16 19:37:14.618466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:30.328 [2024-12-16 19:37:14.618474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:30.328 [2024-12-16 19:37:14.618609] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 213.410 ms, result 0 00:35:31.270 00:35:31.270 00:35:31.270 19:37:15 ftl.ftl_restore_fast -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:35:33.817 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:35:33.817 19:37:17 ftl.ftl_restore_fast -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:35:33.817 19:37:17 ftl.ftl_restore_fast -- ftl/restore.sh@85 -- # restore_kill 00:35:33.817 19:37:17 ftl.ftl_restore_fast -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:35:33.817 19:37:17 ftl.ftl_restore_fast -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:35:33.817 19:37:17 ftl.ftl_restore_fast -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:35:33.817 19:37:17 ftl.ftl_restore_fast -- ftl/restore.sh@32 -- # killprocess 85492 00:35:33.817 19:37:17 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # '[' -z 85492 ']' 00:35:33.817 19:37:17 ftl.ftl_restore_fast -- common/autotest_common.sh@958 -- # kill -0 85492 00:35:33.817 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (85492) - No such process 00:35:33.817 Process with pid 85492 is not found 00:35:33.817 19:37:17 ftl.ftl_restore_fast -- common/autotest_common.sh@981 -- # echo 'Process with pid 85492 is not found' 00:35:33.817 19:37:17 ftl.ftl_restore_fast -- ftl/restore.sh@33 -- # remove_shm 00:35:33.817 Remove shared memory files 00:35:33.817 19:37:17 ftl.ftl_restore_fast -- ftl/common.sh@204 -- # echo Remove shared memory files 00:35:33.817 19:37:17 ftl.ftl_restore_fast -- ftl/common.sh@205 -- # rm -f rm -f 00:35:33.817 19:37:17 ftl.ftl_restore_fast -- ftl/common.sh@206 -- # rm -f rm -f /dev/hugepages/ftl_6c279ede-0f39-4354-bc49-0382ad208b2d_band_md /dev/hugepages/ftl_6c279ede-0f39-4354-bc49-0382ad208b2d_l2p_l1 /dev/hugepages/ftl_6c279ede-0f39-4354-bc49-0382ad208b2d_l2p_l2 /dev/hugepages/ftl_6c279ede-0f39-4354-bc49-0382ad208b2d_l2p_l2_ctx /dev/hugepages/ftl_6c279ede-0f39-4354-bc49-0382ad208b2d_nvc_md /dev/hugepages/ftl_6c279ede-0f39-4354-bc49-0382ad208b2d_p2l_pool /dev/hugepages/ftl_6c279ede-0f39-4354-bc49-0382ad208b2d_sb /dev/hugepages/ftl_6c279ede-0f39-4354-bc49-0382ad208b2d_sb_shm /dev/hugepages/ftl_6c279ede-0f39-4354-bc49-0382ad208b2d_trim_bitmap /dev/hugepages/ftl_6c279ede-0f39-4354-bc49-0382ad208b2d_trim_log /dev/hugepages/ftl_6c279ede-0f39-4354-bc49-0382ad208b2d_trim_md /dev/hugepages/ftl_6c279ede-0f39-4354-bc49-0382ad208b2d_vmap 00:35:33.817 19:37:17 ftl.ftl_restore_fast -- ftl/common.sh@207 -- # rm -f rm -f 00:35:33.817 19:37:17 ftl.ftl_restore_fast -- 
ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:35:33.817 19:37:17 ftl.ftl_restore_fast -- ftl/common.sh@209 -- # rm -f rm -f 00:35:33.817 ************************************ 00:35:33.817 END TEST ftl_restore_fast 00:35:33.817 ************************************ 00:35:33.817 00:35:33.817 real 4m41.809s 00:35:33.818 user 4m29.054s 00:35:33.818 sys 0m12.542s 00:35:33.818 19:37:17 ftl.ftl_restore_fast -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:33.818 19:37:17 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:35:33.818 19:37:17 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:35:33.818 19:37:17 ftl -- ftl/ftl.sh@14 -- # killprocess 76822 00:35:33.818 Process with pid 76822 is not found 00:35:33.818 19:37:17 ftl -- common/autotest_common.sh@954 -- # '[' -z 76822 ']' 00:35:33.818 19:37:17 ftl -- common/autotest_common.sh@958 -- # kill -0 76822 00:35:33.818 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (76822) - No such process 00:35:33.818 19:37:17 ftl -- common/autotest_common.sh@981 -- # echo 'Process with pid 76822 is not found' 00:35:33.818 19:37:17 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:35:33.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:33.818 19:37:17 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=88329 00:35:33.818 19:37:17 ftl -- ftl/ftl.sh@20 -- # waitforlisten 88329 00:35:33.818 19:37:17 ftl -- common/autotest_common.sh@835 -- # '[' -z 88329 ']' 00:35:33.818 19:37:17 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:33.818 19:37:17 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:33.818 19:37:17 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:33.818 19:37:17 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:35:33.818 19:37:17 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:33.818 19:37:17 ftl -- common/autotest_common.sh@10 -- # set +x 00:35:33.818 [2024-12-16 19:37:17.904313] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
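The killprocess trace above (pid 76822, already gone) follows the usual autotest_common.sh pattern: probe the pid with kill -0, report and return if the process is missing, otherwise kill it and wait, as the fuller trace for pid 88329 further below shows. A simplified sketch of that flow (the real helper also checks the process name and handles sudo-owned processes):

    killprocess() {
        local pid=$1
        if ! kill -0 "$pid" 2>/dev/null; then
            echo "Process with pid $pid is not found"
            return 0
        fi
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid"   # wait succeeds because the tests launch the target from this same shell
    }
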
00:35:33.818 [2024-12-16 19:37:17.904462] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88329 ] 00:35:33.818 [2024-12-16 19:37:18.065766] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:34.079 [2024-12-16 19:37:18.190871] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:35:34.651 19:37:18 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:34.651 19:37:18 ftl -- common/autotest_common.sh@868 -- # return 0 00:35:34.651 19:37:18 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:35:34.912 nvme0n1 00:35:34.912 19:37:19 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:35:34.912 19:37:19 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:35:34.912 19:37:19 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:35:35.172 19:37:19 ftl -- ftl/common.sh@28 -- # stores=f6053ec0-8ab7-4721-8648-2747d5401920 00:35:35.173 19:37:19 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:35:35.173 19:37:19 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f6053ec0-8ab7-4721-8648-2747d5401920 00:35:35.433 19:37:19 ftl -- ftl/ftl.sh@23 -- # killprocess 88329 00:35:35.433 19:37:19 ftl -- common/autotest_common.sh@954 -- # '[' -z 88329 ']' 00:35:35.433 19:37:19 ftl -- common/autotest_common.sh@958 -- # kill -0 88329 00:35:35.433 19:37:19 ftl -- common/autotest_common.sh@959 -- # uname 00:35:35.434 19:37:19 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:35.434 19:37:19 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88329 00:35:35.434 killing process with pid 88329 00:35:35.434 19:37:19 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:35.434 19:37:19 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:35.434 19:37:19 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88329' 00:35:35.434 19:37:19 ftl -- common/autotest_common.sh@973 -- # kill 88329 00:35:35.434 19:37:19 ftl -- common/autotest_common.sh@978 -- # wait 88329 00:35:36.819 19:37:21 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:35:37.081 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:35:37.081 Waiting for block devices as requested 00:35:37.081 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:35:37.342 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:35:37.342 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:35:37.604 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:35:42.899 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:35:42.899 19:37:26 ftl -- ftl/ftl.sh@28 -- # remove_shm 00:35:42.899 Remove shared memory files 00:35:42.899 19:37:26 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files 00:35:42.899 19:37:26 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:35:42.899 19:37:26 ftl -- ftl/common.sh@206 -- # rm -f rm -f 00:35:42.899 19:37:26 ftl -- ftl/common.sh@207 -- # rm -f rm -f 00:35:42.899 19:37:26 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:35:42.899 19:37:26 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:35:42.899 
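The clear_lvols call traced above is a small JSON-RPC loop: list every lvol store, pull the UUIDs out with jq, and delete each store before the target is killed. Written out as a standalone sketch (rpc.py path copied from the trace; adjust it for your checkout):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    for lvs in $("$rpc" bdev_lvol_get_lvstores | jq -r '.[] | .uuid'); do
        "$rpc" bdev_lvol_delete_lvstore -u "$lvs"
    done
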
************************************ 00:35:42.899 END TEST ftl 00:35:42.899 ************************************ 00:35:42.899 00:35:42.899 real 17m31.075s 00:35:42.899 user 19m26.067s 00:35:42.899 sys 1m31.585s 00:35:42.899 19:37:26 ftl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:42.899 19:37:26 ftl -- common/autotest_common.sh@10 -- # set +x 00:35:42.899 19:37:26 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:35:42.899 19:37:26 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:35:42.899 19:37:26 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:35:42.899 19:37:26 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:35:42.899 19:37:26 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:35:42.899 19:37:26 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:35:42.899 19:37:26 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:35:42.899 19:37:26 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:35:42.899 19:37:26 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:35:42.899 19:37:26 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:35:42.899 19:37:26 -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:42.899 19:37:26 -- common/autotest_common.sh@10 -- # set +x 00:35:42.899 19:37:26 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:35:42.899 19:37:26 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:35:42.899 19:37:26 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:35:42.899 19:37:26 -- common/autotest_common.sh@10 -- # set +x 00:35:44.284 INFO: APP EXITING 00:35:44.284 INFO: killing all VMs 00:35:44.284 INFO: killing vhost app 00:35:44.284 INFO: EXIT DONE 00:35:44.284 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:35:44.857 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:35:44.857 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:35:44.857 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:35:44.857 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:35:45.119 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:35:45.691 Cleaning 00:35:45.691 Removing: /var/run/dpdk/spdk0/config 00:35:45.691 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:35:45.691 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:35:45.691 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:35:45.691 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:35:45.691 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:35:45.691 Removing: /var/run/dpdk/spdk0/hugepage_info 00:35:45.691 Removing: /var/run/dpdk/spdk0 00:35:45.691 Removing: /var/run/dpdk/spdk_pid58778 00:35:45.691 Removing: /var/run/dpdk/spdk_pid58980 00:35:45.691 Removing: /var/run/dpdk/spdk_pid59187 00:35:45.691 Removing: /var/run/dpdk/spdk_pid59280 00:35:45.691 Removing: /var/run/dpdk/spdk_pid59320 00:35:45.691 Removing: /var/run/dpdk/spdk_pid59442 00:35:45.691 Removing: /var/run/dpdk/spdk_pid59455 00:35:45.691 Removing: /var/run/dpdk/spdk_pid59654 00:35:45.691 Removing: /var/run/dpdk/spdk_pid59740 00:35:45.691 Removing: /var/run/dpdk/spdk_pid59831 00:35:45.692 Removing: /var/run/dpdk/spdk_pid59936 00:35:45.692 Removing: /var/run/dpdk/spdk_pid60033 00:35:45.692 Removing: /var/run/dpdk/spdk_pid60073 00:35:45.692 Removing: /var/run/dpdk/spdk_pid60109 00:35:45.692 Removing: /var/run/dpdk/spdk_pid60180 00:35:45.692 Removing: /var/run/dpdk/spdk_pid60253 00:35:45.692 Removing: /var/run/dpdk/spdk_pid60689 00:35:45.692 Removing: /var/run/dpdk/spdk_pid60742 
00:35:45.692 Removing: /var/run/dpdk/spdk_pid60794
00:35:45.692 Removing: /var/run/dpdk/spdk_pid60810
00:35:45.692 Removing: /var/run/dpdk/spdk_pid60901
00:35:45.692 Removing: /var/run/dpdk/spdk_pid60917
00:35:45.692 Removing: /var/run/dpdk/spdk_pid61014
00:35:45.692 Removing: /var/run/dpdk/spdk_pid61024
00:35:45.692 Removing: /var/run/dpdk/spdk_pid61082
00:35:45.692 Removing: /var/run/dpdk/spdk_pid61095
00:35:45.692 Removing: /var/run/dpdk/spdk_pid61148
00:35:45.692 Removing: /var/run/dpdk/spdk_pid61166
00:35:45.692 Removing: /var/run/dpdk/spdk_pid61325
00:35:45.692 Removing: /var/run/dpdk/spdk_pid61357
00:35:45.692 Removing: /var/run/dpdk/spdk_pid61446
00:35:45.692 Removing: /var/run/dpdk/spdk_pid61618
00:35:45.692 Removing: /var/run/dpdk/spdk_pid61702
00:35:45.692 Removing: /var/run/dpdk/spdk_pid61739
00:35:45.692 Removing: /var/run/dpdk/spdk_pid62161
00:35:45.692 Removing: /var/run/dpdk/spdk_pid62259
00:35:45.692 Removing: /var/run/dpdk/spdk_pid62381
00:35:45.692 Removing: /var/run/dpdk/spdk_pid62435
00:35:45.692 Removing: /var/run/dpdk/spdk_pid62455
00:35:45.692 Removing: /var/run/dpdk/spdk_pid62539
00:35:45.692 Removing: /var/run/dpdk/spdk_pid63156
00:35:45.692 Removing: /var/run/dpdk/spdk_pid63198
00:35:45.692 Removing: /var/run/dpdk/spdk_pid63663
00:35:45.692 Removing: /var/run/dpdk/spdk_pid63761
00:35:45.692 Removing: /var/run/dpdk/spdk_pid63881
00:35:45.692 Removing: /var/run/dpdk/spdk_pid63934
00:35:45.692 Removing: /var/run/dpdk/spdk_pid63954
00:35:45.692 Removing: /var/run/dpdk/spdk_pid63985
00:35:45.692 Removing: /var/run/dpdk/spdk_pid65821
00:35:45.692 Removing: /var/run/dpdk/spdk_pid65953
00:35:45.692 Removing: /var/run/dpdk/spdk_pid65962
00:35:45.692 Removing: /var/run/dpdk/spdk_pid65974
00:35:45.692 Removing: /var/run/dpdk/spdk_pid66014
00:35:45.692 Removing: /var/run/dpdk/spdk_pid66018
00:35:45.692 Removing: /var/run/dpdk/spdk_pid66030
00:35:45.692 Removing: /var/run/dpdk/spdk_pid66076
00:35:45.692 Removing: /var/run/dpdk/spdk_pid66080
00:35:45.692 Removing: /var/run/dpdk/spdk_pid66092
00:35:45.692 Removing: /var/run/dpdk/spdk_pid66137
00:35:45.692 Removing: /var/run/dpdk/spdk_pid66141
00:35:45.692 Removing: /var/run/dpdk/spdk_pid66153
00:35:45.692 Removing: /var/run/dpdk/spdk_pid67539
00:35:45.692 Removing: /var/run/dpdk/spdk_pid67637
00:35:45.692 Removing: /var/run/dpdk/spdk_pid69044
00:35:45.692 Removing: /var/run/dpdk/spdk_pid70790
00:35:45.692 Removing: /var/run/dpdk/spdk_pid70864
00:35:45.692 Removing: /var/run/dpdk/spdk_pid70954
00:35:45.692 Removing: /var/run/dpdk/spdk_pid71058
00:35:45.692 Removing: /var/run/dpdk/spdk_pid71155
00:35:45.692 Removing: /var/run/dpdk/spdk_pid71252
00:35:45.692 Removing: /var/run/dpdk/spdk_pid71326
00:35:45.692 Removing: /var/run/dpdk/spdk_pid71401
00:35:45.692 Removing: /var/run/dpdk/spdk_pid71511
00:35:45.692 Removing: /var/run/dpdk/spdk_pid71597
00:35:45.692 Removing: /var/run/dpdk/spdk_pid71698
00:35:45.692 Removing: /var/run/dpdk/spdk_pid71772
00:35:45.692 Removing: /var/run/dpdk/spdk_pid71853
00:35:45.692 Removing: /var/run/dpdk/spdk_pid71956
00:35:45.692 Removing: /var/run/dpdk/spdk_pid72049
00:35:45.692 Removing: /var/run/dpdk/spdk_pid72147
00:35:45.692 Removing: /var/run/dpdk/spdk_pid72221
00:35:45.692 Removing: /var/run/dpdk/spdk_pid72296
00:35:45.692 Removing: /var/run/dpdk/spdk_pid72404
00:35:45.692 Removing: /var/run/dpdk/spdk_pid72496
00:35:45.692 Removing: /var/run/dpdk/spdk_pid72594
00:35:45.692 Removing: /var/run/dpdk/spdk_pid72668
00:35:45.692 Removing: /var/run/dpdk/spdk_pid72742
00:35:45.692 Removing: /var/run/dpdk/spdk_pid72816
00:35:45.692 Removing: /var/run/dpdk/spdk_pid72885
00:35:45.692 Removing: /var/run/dpdk/spdk_pid72988
00:35:45.692 Removing: /var/run/dpdk/spdk_pid73079
00:35:45.954 Removing: /var/run/dpdk/spdk_pid73178
00:35:45.954 Removing: /var/run/dpdk/spdk_pid73248
00:35:45.954 Removing: /var/run/dpdk/spdk_pid73322
00:35:45.954 Removing: /var/run/dpdk/spdk_pid73396
00:35:45.954 Removing: /var/run/dpdk/spdk_pid73465
00:35:45.954 Removing: /var/run/dpdk/spdk_pid73574
00:35:45.954 Removing: /var/run/dpdk/spdk_pid73671
00:35:45.954 Removing: /var/run/dpdk/spdk_pid73810
00:35:45.954 Removing: /var/run/dpdk/spdk_pid74094
00:35:45.954 Removing: /var/run/dpdk/spdk_pid74131
00:35:45.954 Removing: /var/run/dpdk/spdk_pid74581
00:35:45.954 Removing: /var/run/dpdk/spdk_pid74781
00:35:45.954 Removing: /var/run/dpdk/spdk_pid74876
00:35:45.954 Removing: /var/run/dpdk/spdk_pid74981
00:35:45.954 Removing: /var/run/dpdk/spdk_pid75028
00:35:45.954 Removing: /var/run/dpdk/spdk_pid75054
00:35:45.954 Removing: /var/run/dpdk/spdk_pid75352
00:35:45.954 Removing: /var/run/dpdk/spdk_pid75412
00:35:45.954 Removing: /var/run/dpdk/spdk_pid75481
00:35:45.954 Removing: /var/run/dpdk/spdk_pid75871
00:35:45.954 Removing: /var/run/dpdk/spdk_pid76016
00:35:45.954 Removing: /var/run/dpdk/spdk_pid76822
00:35:45.954 Removing: /var/run/dpdk/spdk_pid76954
00:35:45.954 Removing: /var/run/dpdk/spdk_pid77108
00:35:45.954 Removing: /var/run/dpdk/spdk_pid77205
00:35:45.954 Removing: /var/run/dpdk/spdk_pid77497
00:35:45.954 Removing: /var/run/dpdk/spdk_pid77761
00:35:45.954 Removing: /var/run/dpdk/spdk_pid78113
00:35:45.954 Removing: /var/run/dpdk/spdk_pid78306
00:35:45.954 Removing: /var/run/dpdk/spdk_pid78464
00:35:45.954 Removing: /var/run/dpdk/spdk_pid78511
00:35:45.954 Removing: /var/run/dpdk/spdk_pid78717
00:35:45.954 Removing: /var/run/dpdk/spdk_pid78753
00:35:45.954 Removing: /var/run/dpdk/spdk_pid78806
00:35:45.954 Removing: /var/run/dpdk/spdk_pid79042
00:35:45.954 Removing: /var/run/dpdk/spdk_pid79278
00:35:45.954 Removing: /var/run/dpdk/spdk_pid79792
00:35:45.954 Removing: /var/run/dpdk/spdk_pid80553
00:35:45.954 Removing: /var/run/dpdk/spdk_pid81093
00:35:45.954 Removing: /var/run/dpdk/spdk_pid81896
00:35:45.954 Removing: /var/run/dpdk/spdk_pid82038
00:35:45.954 Removing: /var/run/dpdk/spdk_pid82125
00:35:45.954 Removing: /var/run/dpdk/spdk_pid82601
00:35:45.954 Removing: /var/run/dpdk/spdk_pid82655
00:35:45.954 Removing: /var/run/dpdk/spdk_pid83265
00:35:45.954 Removing: /var/run/dpdk/spdk_pid83656
00:35:45.954 Removing: /var/run/dpdk/spdk_pid84444
00:35:45.954 Removing: /var/run/dpdk/spdk_pid84570
00:35:45.954 Removing: /var/run/dpdk/spdk_pid84617
00:35:45.954 Removing: /var/run/dpdk/spdk_pid84673
00:35:45.954 Removing: /var/run/dpdk/spdk_pid84734
00:35:45.954 Removing: /var/run/dpdk/spdk_pid84804
00:35:45.954 Removing: /var/run/dpdk/spdk_pid84987
00:35:45.954 Removing: /var/run/dpdk/spdk_pid85067
00:35:45.954 Removing: /var/run/dpdk/spdk_pid85134
00:35:45.954 Removing: /var/run/dpdk/spdk_pid85201
00:35:45.954 Removing: /var/run/dpdk/spdk_pid85230
00:35:45.954 Removing: /var/run/dpdk/spdk_pid85319
00:35:45.954 Removing: /var/run/dpdk/spdk_pid85492
00:35:45.954 Removing: /var/run/dpdk/spdk_pid85714
00:35:45.954 Removing: /var/run/dpdk/spdk_pid86317
00:35:45.954 Removing: /var/run/dpdk/spdk_pid86938
00:35:45.954 Removing: /var/run/dpdk/spdk_pid87652
00:35:45.954 Removing: /var/run/dpdk/spdk_pid88329
00:35:45.954 Clean
00:35:45.954 19:37:30 -- common/autotest_common.sh@1453 -- # return 0
00:35:45.954 19:37:30 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:35:45.954 19:37:30 -- common/autotest_common.sh@732 -- # xtrace_disable
00:35:45.954 19:37:30 -- common/autotest_common.sh@10 -- # set +x
00:35:45.954 19:37:30 -- spdk/autotest.sh@391 -- # timing_exit autotest
00:35:45.954 19:37:30 -- common/autotest_common.sh@732 -- # xtrace_disable
00:35:45.954 19:37:30 -- common/autotest_common.sh@10 -- # set +x
00:35:46.215 19:37:30 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:35:46.215 19:37:30 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
00:35:46.215 19:37:30 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log
00:35:46.215 19:37:30 -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:35:46.215 19:37:30 -- spdk/autotest.sh@398 -- # hostname
00:35:46.215 19:37:30 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
00:35:46.215 geninfo: WARNING: invalid characters removed from testname!
00:36:12.796 19:37:55 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:36:14.714 19:37:58 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:36:17.260 19:38:01 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:36:19.870 19:38:04 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:36:23.174 19:38:06 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:36:25.721 19:38:09 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:36:28.270 19:38:12 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:36:28.270 19:38:12 -- spdk/autorun.sh@1 -- $ timing_finish
00:36:28.270 19:38:12 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]]
00:36:28.270 19:38:12 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:36:28.270 19:38:12 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:36:28.270 19:38:12 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:36:28.270 + [[ -n 5029 ]]
00:36:28.270 + sudo kill 5029
00:36:28.281 [Pipeline] }
00:36:28.297 [Pipeline] // timeout
00:36:28.302 [Pipeline] }
00:36:28.317 [Pipeline] // stage
00:36:28.322 [Pipeline] }
00:36:28.337 [Pipeline] // catchError
00:36:28.346 [Pipeline] stage
00:36:28.348 [Pipeline] { (Stop VM)
00:36:28.361 [Pipeline] sh
00:36:28.647 + vagrant halt
00:36:31.189 ==> default: Halting domain...
00:36:36.493 [Pipeline] sh
00:36:36.778 + vagrant destroy -f
00:36:39.325 ==> default: Removing domain...
00:36:40.281 [Pipeline] sh
00:36:40.567 + mv output /var/jenkins/workspace/nvme-vg-autotest/output
00:36:40.577 [Pipeline] }
00:36:40.592 [Pipeline] // stage
00:36:40.597 [Pipeline] }
00:36:40.611 [Pipeline] // dir
00:36:40.617 [Pipeline] }
00:36:40.631 [Pipeline] // wrap
00:36:40.637 [Pipeline] }
00:36:40.649 [Pipeline] // catchError
00:36:40.659 [Pipeline] stage
00:36:40.661 [Pipeline] { (Epilogue)
00:36:40.673 [Pipeline] sh
00:36:40.958 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:36:46.254 [Pipeline] catchError
00:36:46.256 [Pipeline] {
00:36:46.270 [Pipeline] sh
00:36:46.555 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:36:46.555 Artifacts sizes are good
00:36:46.565 [Pipeline] }
00:36:46.579 [Pipeline] // catchError
00:36:46.590 [Pipeline] archiveArtifacts
00:36:46.597 Archiving artifacts
00:36:46.728 [Pipeline] cleanWs
00:36:46.765 [WS-CLEANUP] Deleting project workspace...
00:36:46.765 [WS-CLEANUP] Deferred wipeout is used...
00:36:46.796 [WS-CLEANUP] done
00:36:46.798 [Pipeline] }
00:36:46.813 [Pipeline] // stage
00:36:46.818 [Pipeline] }
00:36:46.831 [Pipeline] // node
00:36:46.835 [Pipeline] End of Pipeline
00:36:46.879 Finished: SUCCESS